
Cybersecurity Education in the
Age of AI and Automation & Ambiguity
Seattle University
November 12th to 14th
29th Colloquium
November 12-14, 2025
Cybersecurity Education in the Age of AI and Automation & Ambiguity
The 2025 Colloquium for Information Systems Security Education (CISSE) celebrates its 29th anniversary as the premier Cybersecurity Education Conference. Join us to explore the latest trends in cyber education and engage with subject matter experts from academia, government, and industry. We are delighted to welcome Seattle University as our host institution, joined by the City University of Seattle as a co-host—both serving as esteemed academic partners for this year’s event..
Subscribe for updates
Questions?
Please direct any questions regarding papers, registration, participation, and sponsorship to Andrew at abelon@thecolloquium.org.
Registration
To foster meaningful dialogue, professional networking, and hands-on collaboration, this year's Colloquium emphasizes physical presence and active engagement. We believe that the most impactful exchange of ideas—particularly in an era defined by AI, automation, and ambiguity—happens face to face. Workshops, poster sessions, and panels are designed to maximize participation, allowing attendees to connect directly with speakers, researchers, and peers in the cybersecurity education community.
November 12-14, 2025
Cybersecurity Education in the Age of AI and Automation & Ambiguity
In-person (Seattle, WA)
Registration for in-person attendance is managed through our portal. Please note that changes, including transfers or cancellations, are allowed until October 31, 2025.
Bulk Registration
For bulk registration inquiries, please contact Andrew at abelon@thecolloquium.org. Be sure to include a billing contact email, along with the names and email addresses of prospective attendees.
Cancellation
In-person: Withdrawals without penalty are allowed until October 31, 2025. After this date, a $50 administrative fee will apply to all cancellations.
Questions?
Please direct any questions regarding papers, registration, participation, and sponsorship to Andrew at abelon@thecolloquium.org.
Agenda
Below is the preliminary agenda for the 29th Colloquium. This schedule will be updated regularly as we approach the event in November. If you need to submit or update your presentation materials, headshot, or bio, please contact Andrew via email.
| Time | Event |
|---|
Paper Presentations
Below are the abstracts and submitted slides for paper presentations at the 29th Colloquium, organized alphabetically. Presenting authors who wish to submit or update their presentation materials, headshots, or bios should contact Andrew by email. Special thanks to all authors and volunteers for your time and dedication.
All Sessions
Speakers & Authors
We are excited to feature SMEs and students from academia, government, and industry at the 29th Colloquium. To ensure optimal visibility for attendees, we ask that all participants submit their presentation materials, bios, and headshots to Andrew for inclusion in the program. A special thanks to our esteemed speakers and volunteers for your time and contributions.
Denise Kinsey
Michael Whitman
Seattle, Washington
A city where innovation meets natural beauty! Nestled between the shimmering waters of Puget Sound and the majestic peaks of the Cascade Mountains, Seattle is a dynamic city that blends cutting-edge technology with a rich cultural heritage. Known as the “Emerald City” for its lush evergreen surroundings, Seattle offers breathtaking views, world-class attractions, and a thriving culinary and arts scene. Whether exploring its iconic skyline or discovering hidden gems in its eclectic neighborhoods, visitors are sure to find inspiration at every turn.
Downtown Seattle highlights include:
- Stroll through the iconic Pike Place Market and sample fresh seafood and artisan goods.
- Take in sweeping city views from the Space Needle or explore immersive exhibits at Chihuly Garden and Glass.
- Cruise along Elliott Bay or hop a ferry for a quintessential Pacific Northwest experience.
- Explore diverse dining — from award-winning restaurants and craft breweries to international flavors and farm-to-table cuisine.
- Enjoy vibrant nightlife, live music, and theater across the city's many venues.
- Find tranquility in nearby parks, including Discovery Park, with trails and vistas of Puget Sound.
Silver Cloud Hotel - Seattle Broadway
1100 Broadway (map)
(800) 590-1801
0.4 miles
Experience the heart of Seattle for just $139 per night!
Lodging
For those who prefer accommodations within walking distance, we recommend booking at the Silver Cloud Hotel Seattle – Broadway to take advantage of our special group rate. Otherwise, we encourage that you book your preferred area hotel as soon as possible.
Silver Cloud Hotel - Seattle Broadway
1100 Broadway (map)
(800) 590-1801
0.4 miles
Hotel Sorrento
900 Madison St. (map)
(800) 426-1265
0.5 miles
Inn at Virginia Mason
1006 Spring St. (map)
(800) 283-6453
0.7 miles
Renaissance Seattle Hotel
515 Madison St. (map)
(206) 583-0300
0.7 miles
Crowne Plaza Seattle - Downtown
1113 6th Ave. (map)
(877) 227-6963
0.8 miles
Coast Seattle Downtown Hotel
1301 6th Ave. (map)
(800) 716-6199
0.9 miles
Inn at the WAC
1325 6th Ave. (map)
(206) 622-7900
0.9 miles
Kimpton Hotel Monaco Seattle
1101 4th Ave. (map)
(855) 546-7866
0.9 miles
Sheraton Grand Seattle
1400 6th Ave. (map)
(206) 621-9000
1.0 mile
Seattle University
901 12th Avenue
Seattle, WA 98122
(Campus Map)
Plan Your Visit
Seattle University, located in the heart of the city's vibrant Capitol Hill district, provides the perfect setting. The campus combines historic charm with modern facilities, all within walking distance of some of Seattle’s best dining, cultural landmarks, and green spaces.
Sponsors & Partners
Seattle University
Seattle University, located in the heart of the Pacific Northwest's thriving technology corridor, is a nationally recognized independent university known for its academic excellence, social justice leadership, and commitment to educating the whole person. With strong programs across science, engineering, business, law, and the humanities, the university fosters interdisciplinary collaboration and civic engagement. Its central location in Seattle provides students and faculty with meaningful connections to industry, government, and nonprofit sectors, making it a vibrant hub for innovation and public service.
City University of Seattle
As a private nonprofit institution of higher education, City University of Seattle's mission is to change lives for good by offering high quality and relevant lifelong education to anyone with the desire to learn. CityU's vision is to be the destination of choice for accessible, career-focused education with a focus on equipping a diverse student population with 21st century skills and technology tools.
Codio
At Codio, we fuse computing education research with AI to deliver learning experiences that truly build job-ready skills. By combining evidence-based pedagogy, intelligent technology, and immersive design, we create a new standard of “better tech skills learning.” The results speak for themselves: higher completion rates, increased time on task, stronger grade attainment, and learners who graduate with the confidence and capabilities today's workforce demands.
Master of Cybersecurity and Leadership - University of Washington, Tacoma
The Master of Cybersecurity and Leadership (MCL) program at the University of Washington Tacoma equips professionals and military personnel with technical backgrounds to enhance their leadership and cybersecurity skills for career advancement. By integrating resources from the School of Engineering & Technology and the Milgard School of Business, the program fosters innovative solutions for information assurance and cybersecurity challenges, positioning graduates for success and entrepreneurial opportunities in Washington's cybersecurity landscape.
Silver Cloud Hotel Seattle – Broadway
Discover the perfect location for your stay in Seattle at the Silver Cloud Hotel Seattle – Broadway. Situated in the vibrant Capitol Hill neighborhood, our hotel offers convenient access to Seattle University, Swedish Medical Center, and downtown Seattle, where you can explore the city's top-notch dining, shopping, and entertainment options. Don't forget to make a stop at the lively Pike Place Market during your visit.
29th Colloquium
CISSE™ offers a distinctive platform for showcasing your organization with precision, targeting not just cybersecurity enthusiasts, but the educators in cybersecurity. For 29 years, the esteemed members of CISSE™, including those deeply invested in educational methodologies, have convened to unravel the complexities of teaching emerging subjects. Place your tools and resources in the hands of these distinguished individuals and demonstrate how you can bolster their mission.
Volunteers
On behalf of the organizing team, we extend our sincere thanks to all the volunteers who generously contributed their time and expertise to the 29th Colloquium. Your dedication and hard work have been instrumental in shaping this year’s event, and your contributions are vital to maintaining the quality and relevance of The Colloquium™. We deeply appreciate your continued support and commitment.
Ashutosh Agarwal
Stevens Institute of Technology
Chirag Agrawal
IEEE Member
Vaibhav Agrawal
Jackie Armstrong
Hill College
Gaurab Baral
Northern Kentucky University
Jane Blanken-Webb
Wilkes University
Benjamin Bradley
Kratos Defense and Security Solutions
Ingrid Buckley
Florida Gulf Coast University
Prashanth Busi Reddy Gari
University of North Carolina
Brian Callahan
Monmouth University
Eric Chan-Tin
Loyola University Chicago
Ankur Chattopadhyay
Northern Kentucky University
Tom Chothia
University of Birmingham
Christopher Collins
Nova Southeastern University
Maeve Dion
University of New Hampshire
Eamon Doherty
Fairleigh Dickinson University
Alfreda Dudley
Towson University
Veronica Elze
City University of Seattle
Eric Eskelsen
Idaho State University
Antonio Espinoza
Eastern Washington University
Steven Furnell
University of Nottingham
Ankit Gupta
Researcher
Robert Honomichl
University of Arizona
Yen-Hung Frank Hu
Norfolk State University
Stephen Huang
University of Houston
Juyeon Jo
University of Nevada, Las Vegas
Amanda Joyce
Argonne National Laboratory
Nnanna Kalu-Mba
United Nations
Siddharth Kaza
Towson University
Yoohwan Kim
University of Nevada, Las Vegas
Lin Li
Prairie View A&M University
Larry Liu
Morgan State University
Joseph Lozada
Marymount University
Christine Lumen
New York University
Herbert Mattord
Kennesaw State University
Sean McBride
Idaho State University
Akshay Mittal
University of the Cumberlands
Denis Nicole
University of Southampton
Sandra Nite
Texas A&M University
Venkat Laxmi Sateesh Nutulapati
Researcher
Bernardo Palazzi
Brown University
Yin Pan
Rochester Institute of Technology
Ajai Paul
Affirm Inc.
Matt Plass
Lewis University
Weihao Qu
Monmouth university
Pavan Reddy
George Washington University
Chris Rondeau
Bossier Parish Community College
Ivo Rosa
EDP
Thierry Sans
University of Toronto Scarborough
Gregory Simco
Nova Southeastern University
Jill Slay
University of South Australia
Meera Sridhar
University of North Carolina, Charlotte
Krista Stacey
University of South Alabama
Stuart Steiner
Eastern Washington University
Ryan Straight
University of Arizona
Sara Sutton
Grand Valley State University
James Tippey
Excelsior University
Shambhu Upadhyaya
SUNY at Buffalo
Udaya Veeramreddygari
IEEE Member
Vivek Venkatesan
The Vanguard Group
Hsiaoan Wang
Northeastern University
Weichao Wang
University of North Carolina, Charlotte
Carol Woody
Software Engineering Institute
Guang Yang
University of California, Berkeley
Vaibhav Agrawal
Andrew Belón
CISSE
William Butler
Capitol Technology University
Andrew Hurd
Empire State University
Kendra Evans
CODIO
Denise Kinsey
Franklin University
Dan Likarish
Regis University
Erik Moore
Seattle University
Michael Whitman
Kennesaw State University
Morgan Zantua
City University of Seattle
Andrew Belón
CISSE
Alexander Kent
Denise Kinsey
Franklin University
Dan Likarish
Regis University
Erik Moore
Seattle University
Vaibhav Agrawal
Adonnis Alexander
Franklin University
Andrew Belón
CISSE
Mohit Chandarana
CODIO
Artem Protsenko
Bard College
Sarah Zerpa
University of Tampa
A case study for combating student overuse of Generative Artificial Intelligence in Cybersecurity educational activities using Augmented Reality Capture-the-Flag development
- Shoshana Sugerman, Sanya Joseph, Quinn Colognato, Mary Cotrupi, Aanya Mehta, Tanvi Mehta, Emily Goldman, Ishneet Kaur, Victoria Cai, Gabriel Bezerra, Adam Kaplan, Arielle Revis, Lala Liu, Samuel Leung, Elif Kulahlioglu, Rachel Schneider, Mikah Schueller, Quinn Sharp, James Porvaznik, Brian Callahan
- Session 01 - November 12th @ 10:20 AM
Cybersecurity Capture-the-Flag (CTF) tournaments are well-understood to teach skills necessary for success in today's cybersecurity field. However, that does not mean CTFs are without critique. In the age of Generative Artificial Intelligence (AI), particularly for large-scale CTF tournaments, the use of Generative AI may be permitted or even encouraged to match the reality of today's practitioners, who are using AI systems to protect data, systems, and people. In such a situation, CTF participants must themselves balance the use of Generative AI with its overuse--effectively self-police the draw to offload one's thinking to the machine in pursuit of correct answers and prizes. In this paper, we introduce a framework for combating the overuse of Generative AI in cybersecurity educational activities through the building our own CTF using Augmented Reality (AR) technologies. Written by the nineteen undergraduate students who developed the CTF along with our professor who supervised our work, we argue that using the pedagogic lenses of the "see one, do one, teach one" model and peer learning allowed us to reinterpret our efforts on our CTF into a vision of shared labor and shared responsibility. This reframing of our own understanding of our work effectively acted as a counterbalance, keeping us focused on using Generative AI as a tool and not a crutch, leading to improved educational outcomes for us as individuals and the group as a whole. We hope that documenting our experiences inspires others to adopt similar counterbalance techniques where and when appropriate.
Keywords: augmented reality, capture-the-flag, cybersecurity, pedagogy, "see one do one teach one" model, peer learning
Deepfake-Enabled Infiltration: The Threat of Synthetic Identities in Corporate Environments
- Joseph Lozada
- Session 01 - November 12th @ 10:40 AM
This paper explores the evolving threat of deepfakes in the context of insider threats, particularly how advanced persistent threats (APTs) are leveraging AI-generated audio and video to impersonate job applicants and gain access to sensitive systems. While deepfakes have legitimate applications in entertainment, education, and business, they are increasingly being weaponized for deception and cyber intrusion. The paper outlines recent incidents, assesses technical vulnerabilities, and evaluates current risk management frameworks such as NIST RMF. It concludes with policy and technology recommendations to enhance detection and prevention strategies, especially during remote hiring and onboarding processes.
Keywords: deepfake, insider threat, artificial intelligence, AI, machine learning, ML, cybersecurity
From Creation to Detection: How Dataset Properties Impact Deepfake Model Performance
- Lauren Matthews, Idongesit Mkpong-Ruffin, Deidre Evans, Chutima Boonthum-Denecke
- Session 01 - November 12th @ 11:00 AM
Deepfakes are sophisticated, AI-generated alterations of images and videos that pose significant threats to cybersecurity, particularly with face-swapping techniques that can deceive and spread misinformation. A major limitation in current deepfake detection strategies is that the deepfakes used to train these models are often lower quality than those encountered in real-world scenarios [1]. This weakens model performance when tested against more sophisticated media alterations.
To bridge this gap, deepfake detection datasets must evolve to include high-quality deepfakes that better reflect the real-world threats. This study examines popular datasets such as Celeb-DF and DF-1.0, which revealed that despite efforts toward attribute variability, these datasets often lack demographic and facial variability. Even more important, many datasets do not publish attribute annotations, preventing researchers from fully understanding the imbalances present in their training data. Without dataset transparency, models may be unknowingly trained on skewed data, limiting their ability to generalize effectively.
The study presents an experiment using FaceSwap, a widely used deepfake creation tool, to investigate how dataset composition affects deepfake generation and detection. The experiment is dual-focused: (1) analyzing the impact of dataset augmentation through horizontal mirroring, which aims to increase facial orientation variability and improve model performance, and (2) evaluate how FaceSwap performs on subjects with different attributes to identify inherent skews in the deepfake generation process.
Keywords: Deepfake, Data Bias, AI Cybersecurity
Study of AI Object Detection: Patterns on Animals with YOLO and Adversarial Patches
- Aniya Hopson, Chutima Boonthum-Denecke, Idongesit Mkpong-Ruffin
- Session 01 - November 12th @ 11:20 AM
Artificial Intelligence (AI) has become an increasingly powerful tool in various domains, particularly in image classification and object detection. As AI advances, novel methods to deceive machine learning models, such as adversarial patches, have emerged. These subtle modifications to images can lead to misclassification of objects, posing a substantial challenge to their reliability. In this paper, we present our research findings and literature on adversarial examples and object detection. This research builds upon the previous work by investigating the impact of small patches on object detection using YOLOv8. We started by exploring patterns within images and their influence on model accuracy. Then a follow-up study evaluating how adversarial patches, particularly those targeting animal patterns, affect YOLOv8’s ability to accurately detect objects. Additionally, we explore how untrained patterns impact the model’s performance, aiming to identify vulnerabilities and enhance the robustness of object detection systems.
Keywords: Artificial Intelligence, Object Detection, YOLOv8, Adversarial Patches, Machine Learning
Unequal Risks: Ethnicity, Region, and Cybersecurity Outcomes in the United States
- Kaushik Reddy Mitta, Marc Dupuis
- Session 01 - November 12th @ 11:40 AM
Cybersecurity risks are often treated as uniform, yet disparities across demographic groups suggest otherwise. This study investigates how ethnicity and geographic region shape cybersecurity outcomes in the United States, focusing on victimization, tool adoption, and awareness. Survey data from 470 adult participants were analyzed using ANOVA, Kruskal–Wallis tests, chi-square analyses, and logistic regression models. Results reveal two paradoxes. First, Asian/Pacific Islander respondents reported higher awareness and greater use of protective tools, yet also faced significantly elevated odds of identity theft, phishing losses, and account takeovers. Second, suburban residents exhibited higher preparedness than urban or rural populations, but consistently experienced greater exposure to cyber incidents, particularly financial fraud. Hispanic/Latinx and rural groups reported lower adoption of security tools, reflecting barriers of access and language. These findings highlight that awareness and adoption alone are insufficient when structural vulnerabilities and targeted exploitation are at play. The study underscores the need for culturally competent education, expanded infrastructure access, and adaptive monitoring systems to reduce disparities and promote a more equitable cybersecurity landscape.
Keywords: cybersecurity disparities, demographic factors, cyber victimization, digital divide, ethnicity, regional differences
A Deweyan Foundation for Cultivating Reflective Cyber-Attuned Habits in an Age of AI and Ambiguity
- Jane Blanken-Webb
- Session 02 - November 12th @ 2:00 PM
Rapid advances in AI, automation, and hyperconnectivity are outpacing human habits, producing pervasive ambiguity. Drawing on John Dewey's philosophy of habit as growth through disruption and inquiry, this paper reconceptualizes cybersecurity education as cultivating reflective, cyber-attuned habits across society—not only training specialists. Dewey's account of growth through disruption, inquiry, and reorganization are translated into three educational design moves: (1) embed reflective inquiry within procedural exercises; (2) employ inquiry-based, experiential formats (e.g., capture-the-flag, cyber-defense exercises, cyber-ranges) to practice reasoning under uncertainty; and (3) extend learning to social practices of verification and shared deliberation beyond technical settings. The approach turns error into material for growth and equips learners to act with intelligent adaptability.
Keywords: cybersecurity education, John Dewey’s philosophy, habit, growth
AI-Driven Cloud Security: AIOps for Threat Detection and Compliance
- Advait Patel
- Session 02 - November 12th @ 2:20 PM
The rapid growth of cloud and hybrid computing has brought significant scale, complexity, and security challenges to IT operations. Traditional rule-based monitoring systems and signature-based Security Information and Event Management (SIEM) tools are no longer sufficient to process the enormous volume of events generated in modern environments or to provide timely, accurate detection of incidents. Artificial Intelligence for IT Operations (AIOps) has emerged as a transformative approach by combining machine learning, predictive modeling, big data analytics, and automation to improve anomaly detection, optimize resource allocation, and accelerate the process of identifying root causes. Empirical studies report that AIOps platforms can reduce mean time to detection by nearly half and cut audit preparation time by up to 60%, underscoring their advantages over conventional methods. In addition to performance monitoring, AIOps is increasingly applied to security and compliance, enabling automated evidence collection, support for zero-trust architectures, and AI-assisted remediation workflows. Despite these benefits, reliance on opaque "black-box" models raises concerns around explainability, accountability, and regulatory compliance, particularly in mission-critical domains. Multi-cloud and hybrid infrastructures further complicate deployment due to interoperability issues, data silos, and risks of algorithmic bias. This paper reviews academic and industry work on AI-driven cloud security and operations from 2022 to 2025, outlines a taxonomy of AIOps functions spanning detection, compliance, response, and governance, and identifies unresolved challenges such as adversarial resilience, transparency, and multi-cloud coordination. Finally, future directions are discussed, including explainable and neuro-symbolic AIOps, federated analytics for distributed environments, and autonomous self-healing infrastructures. The review aims to provide researchers and practitioners with a consolidated reference for developing trustworthy, scalable, and secure AI-driven cloud operations.
Keywords: Artificial Intelligence for IT Operations (AIOps), Cloud Security, Hybrid Cloud, Multi-Cloud, Predictive Analytics, Anomaly Detection, Compliance Automation, Self-Healing Systems, Zero Trust, Workflow Orchestration, Explainable AI (XAI), Adversarial Machine Learning, Federated Analytics, IT Service Management, Big Data Analytics
Best Practices in Security Convergence: Tales from the Trenches
- Michael Whitman, Herbert Mattord, Kathleen Kotwica
- Session 02 - November 12th @ 2:40 PM
Security convergence, the integration of cybersecurity and physical security, has been discussed for over two decades, yet organizations still face challenges in defining and implementing effective strategies. This article explores the evolution of security convergence, highlighting key overlaps between physical and virtual security, especially with the rise of computerized operations in critical infrastructure. Through a 2023 survey and in-depth interviews with security professionals, four best practices are identified: implementing employee risk ratings, utilizing decision matrices, establishing fusion centers, and fostering a supportive organizational culture. These practices enhance collaboration and optimize security operations, demonstrating that effective convergence is not just about structural integration but also strategic coordination and cultural alignment for improved organizational resilience.
Keywords: Security convergence, Cybersecurity and physical security integration enterprise risk management, fusion centers, Security governance, workforce risk rating
EQiLevel: Emotion-Aware Reinforcement Learning for Multimodal Academic Tutoring
- Veronica Elze
- Session 02 - November 12th @ 3:00 PM
EQiLevel is an emotionally adaptive AI tutoring system that integrates reinforcement learning (RL), large language models (LLMs), and real-time sentiment detection to personalize instruction dynamically. Traditional intelligent tutoring systems often follow rigid rules and fail to account for learners’ emotional states or individual learning preferences, reducing engagement. EQiLevel addresses this limitation by analyzing voice-based cues and adapting lesson difficulty, tone, and pacing in real time through a JSON-based Model Context Protocol (MCP). The MCP encodes emotion, performance, and learning style variables into structured state information, guiding GPT dialogue generation and reinforcement learning policy updates. Evaluation with simulated data demonstrated 78% successful adaptation to frustration cases, Whisper transcription accuracy with a 5.3% word error rate, emotion detection accuracy of 84% with 81% tone alignment, and improved RL convergence with average rewards rising from 0.41 to 0.63. In the context of cybersecurity education, EQiLevel illustrates how adaptive, emotion-aware tutoring can prepare learners to remain resilient under ambiguous and adversarial conditions, such as phishing awareness and threat analysis. By uniting technical adaptability with emotional intelligence, EQiLevel provides a scalable framework for inclusive, resilient, and effective cybersecurity education in the age of AI and automation.
Keywords: reinforcement learning, emotion-aware tutoring, cybersecurity education, adaptive learning, multimodal AI ambiguity resilience
Self-Hosted Workflow Automation For AI-Based Cybersecurity Operations
- Hareign Casaclang, Bianca Ionescu, Yoohwan Kim, Ju-Yeon Jo
- Session 02 - November 12th @ 3:20 PM
Cybersecurity operations often involve repetitive tasks such as running Nmap scans, analyzing logs, and performing Open-Source Intelligence (OSINT) investigations. These processes are essential for maintaining security but consume time and resources that many small organizations cannot spare. While commercial automation platforms exist to reduce this workload, they are typically costly and inaccessible to businesses without dedicated IT staff. This paper investigates n8n, a self-hosted and low-cost workflow automation platform, as a practical alternative for cybersecurity automation. By integrating security tools and external large language models (LLMs) such as ChatGPT, Gemini, and Ollama, n8n can automate vulnerability scanning, assign severity ratings, and generate reports tailored to both technical and executive stakeholders. Experiments show that n8n workflows can effectively combine traditional scans with Artificial Intelligence (AI)-driven analysis to produce actionable outputs. Although limitations remain, including a steep learning curve and restrictions in the free tier, n8n demonstrates potential for broadening access to automation in cybersecurity. For small organizations, this approach provides a cost-effective way to strengthen security posture, while in academic contexts it provides a hands-on platform for teaching and experimenting with automation and AI in cybersecurity.
Keywords: Cybersecurity, Workflow Automation, n8n, Nmap, Artificial Intelligence (AI), Vulnerability scanning
Analysis of Cybersecurity Risks and Teenage Digital Behavior Patterns
- Eric McCloy, Samuel Nimako-Mensah, Albert Samigullin
- Session 03 - November 12th @ 4:00 PM
As teenagers increasingly engage with digital technology, cybersecurity vulnerabilities present significant risks to their online safety and privacy. Adolescents who lack awareness of secure online practices are particularly vulnerable to malicious actors seeking to exploit them. This paper investigates the relationship between real-world online behavior of teenagers, cybersecurity risks, and device interactions. The primary data set used for this analysis is Teenage online behavior and cybersecurity risks. First, we consider demographic information: age, education, time spent online. We correlate this with online behaviors: use of a VPN, type of equipment (computer, mobile), use of public internet, engagement with risky websites. Finally, we analyze the data set using a combination of demographic and behavioral patterns to search for high-risk, negative outcomes. For our research, we analyze teenage online behavior patterns to identify key risk factors, develop predictive models for cybersecurity vulnerabilities, and produce actionable visualizations that illustrate the relationship between digital literacy and online safety. Our findings utilize data and business analytics to provide evidence-based recommendations for parents, educators, and policymakers to enhance teenage cybersecurity awareness and protective strategies.
Keywords: online behavior, teenagers, adolescent, privacy, cybersecurity risk, data analysis
CodeWars: Using LLMs for Vulnerability Analysis in Cybersecurity Education
- Arunima Chaudhary, Walter Colombo, Amir Javed, Junaid Haseeb, Vimal Kumar, Fernando Alva Manchego, Richard Larsen
- Session 03 - November 12th @ 4:20 PM
Large Language Models (LLMs) are increasingly explored as tools for software development and could further constitute a supplementary source for the development of varied examples intended for pedagogical use. While they can improve productivity, their ability to produce code that is both secure and compliant with Secure Software Development (SSD) practices remains uncertain, raising concerns about their role in cybersecurity education. If LLMs are to be integrated effectively, students must be trained to critically evaluate generated code for correctness and vulnerabilities, raising an important question: How can LLM-generated code be effectively and securely incorporated into Cybersecurity education for teaching vulnerability analysis? This paper introduces CodeWars, a novel teaching methodology that combines LLM-generated and human-written code to examine how students engage with vulnerability detection tasks. CodeWars was implemented as a pilot study with a total of 32 students at Cardiff University and the University of Waikato, where students analyzed flawed, secure, and mixed-origin code samples. By comparing student approaches, analysis, and perceptions, the study provides insights into how vulnerabilities are detected, how code origins are distinguished, and how SSD practices are applied. Our analysis of student feedback and interviews indicates that Codewars produced structured and accessible code, simplifying vulnerability identification and offering educators the means to efficiently develop varied SSD teaching applications. These findings illuminate both the advantages and constraints of employing LLMs in secure coding and position this study as a foundational step toward the responsible adoption of AI in Cybersecurity education.
Keywords: Cybersecurity Education, Vulnerability analysis, Secure Software Development, Large Language Models, Cyber-security Pedagogy, GenAI
From Social Sharing to Security Lessons: Behaviors, Disclosure, and Cyber Threats
- Sav Wheeler, Marc Dupuis
- Session 03 - November 12th @ 4:40 PM
As social networking sites (SNSs) have become integral to daily life, concerns about privacy and cybersecurity risks have intensified. Malicious actors exploit SNSs for phishing, malware distribution, and identity-driven attacks, often leveraging personal information voluntarily disclosed by users. This study investigates the relationships between SNS usage, personal information disclosure, cybersecurity behaviors, and experiences with cybersecurity threats. We employed a mixed-methods approach, combining survey data from 275 participants with semi-structured interviews. Correlation analyses revealed that frequency of SNS use and usage motivations---particularly for meeting new people and for self-presentation---were positively associated with higher levels of personal information disclosure. Disclosure of personal information and frequency of SNS usage were also significantly correlated with reported experiences of cybersecurity threats, though less so with protective cybersecurity behaviors. Interview responses highlighted both direct encounters with threats and broader perceptions of privacy vulnerabilities. Together, these findings underscore the complex interplay between social behavior on SNSs and cybersecurity risks, suggesting that greater user education and platform-level safeguards are necessary to mitigate emerging threats. We conclude with implications for cybersecurity awareness efforts and recommendations for future research.
Keywords: social networking sites, cybersecurity behavior, personal information disclosure, social engineering, privacy, mixed methods
Past Experience and Threat Awareness as Determinants of Regular Information Backup
- Benard Birundu, Marc Dupuis
- Session 03 - November 12th @ 5:00 PM
Cybersecurity threats continue to evolve in sophistication, increasingly targeting individuals as the weakest link in the security chain. While technical solutions remain essential, human-centered protective behaviors such as regular information backup are critical to mitigating risks from ransomware, device failure, and accidental deletion. This study investigates how past experiences with cyber incidents and awareness of threats influence backup practices among individuals in the United States. Drawing on Protection Motivation Theory (PMT), we conducted a mixed-methods survey (N{=}308) that measured threat appraisal (perceived severity, vulnerability) and coping appraisal (self-efficacy, response efficacy, response cost), along with threat awareness and prior experience. Multiple regression explained nearly half of the variance in backup behavior ($R^2{=}0.498$), with self-efficacy and threat awareness emerging as strong positive predictors; response cost was negative and significant. Qualitative responses illustrated experience- and awareness-driven adoption and highlighted common hybrid routines (cloud plus removable media) as well as barriers related to perceived effort or low perceived value. The findings underscore the importance of integrating human factors into cybersecurity programs and suggest concrete levers for awareness, design, and policy that reduce friction and promote routine backups.
Keywords: Cybersecurity, Protection Motivation Theory, Threat Awareness, Past Experience, Information Backup, Human Factors
Roll with it: Awareness raising with Cyber Defence Dice
- Steven Furnell, Lucija Šmid, James Todd, Xavier Carpent, Simon Castle-Green
- Session 03 - November 12th @ 5:20 PM
Cybersecurity awareness is widely recognised as an important requirement, but is frequently overlooked or addressed in ways that do not engage the interest of the target audience. In an attempt to broaden the options available for achieving this, the paper discusses the concept, design and evaluation of a new dice-based game designed to promote entry-level cyber security awareness in relation to common forms of attack and defence. The concept of the game is that players can defend against prior attacks, or use attacks to test defences, with the images on the different die faces denoting threats and safeguards of different strengths and which are countered in different ways. The discussion includes a worked example of the one of the game modes that has been designed in order to illustrate how players would take turns and make decisions in practice. It then presents initial results from a series of seven hands-on playtest sessions conducted with a range of audiences, including the general public, cyber educators and cyber professionals. The findings indicate that all audiences were positive about the game concept and found it enjoyable to play. Additionally, it was recognised to have value in raising and maintaining awareness, and would be a game that participants would play again and recommend to others.
Keywords: Cyber awareness, Cyber engagement, Dice, Gamification
Building Nuclear-Specific Cybersecurity Expertise in Higher Education
- Amorita Christian, Myles Nelson, Tiffany Fuhrmann
- Session 04 - November 13th @ 10:20 AM
The rapid digitalization of nuclear power plants (NPPs) and the deployment of advanced and small modular reactors (A/SMRs) have expanded the cybersecurity attack surface within the nuclear sector. This evolution introduces unique challenges beyond those faced in general information technology (IT), operational technology (OT) and industrial control system (ICS) security, due to nuclear power’s regulatory rigor, safety-critical nature, and operational needs.
A pressing workforce gap persists; cybersecurity graduates typically lack nuclear-specific context and retraining them for industry readiness requires 12–18 months, creating a significant burden. This paper addresses this gap by defining the domains of knowledge that nuclear cybersecurity specialists must master, spanning cybersecurity, nuclear engineering, OT/ICS security, and regulatory governance. We propose a curricular framework integrating technical, regulatory, and applied learning components to accelerate workforce readiness.
Our approach builds on existing findings that current curricula inadequately integrate nuclear engineering and cybersecurity, shifting the discourse from why specialization is needed to what knowledge must be taught. The recommendations have implications for workforce development and long-term resilience of the nuclear energy sector.
Keywords: nuclear cybersecurity, workforce development, higher education, OT/IT cybersecurity
Lost in Translation: Evaluating Cybersecurity Policy and Terminology Accuracy for Multilingual Learners
- Andrew Hurd, Gloria Kramer, Pamela Doran
- Session 04 - November 13th @ 10:40 AM
As cybersecurity becomes a cornerstone of global higher education, language has emerged as an unexpected point of vulnerability. Machine translation (MT) tools, increasingly used to render cybersecurity policies into multiple languages, often distort meaning by translating technical terms literally rather than conceptually. Words like firewall, phishing, or backdoor lose their intent in translation, creating barriers to comprehension and leaving multilingual learners at risk of misunderstanding critical policies. This paper explores the intersection of cybersecurity vocabulary, machine translation, and language equity, drawing on examples of mistranslations and language acquisition research to demonstrate how linguistic gaps can weaken both institutional safeguards and student confidence.
We argue that cybersecurity education must treat terminology with the same precision as code, recognizing that mistranslation not only undermines clarity but also compounds anxiety for multilingual learners navigating complex technical content. To address these challenges, the paper examines strategies such as the use of back-translation, custom glossaries, Universal Design for Learning (UDL) frameworks, and emerging AI translation tools like custom ChatGPT models. Together, these approaches highlight a pathway for higher education to balance inclusion with accuracy, ensuring that policies and coursework maintain both technical rigor and accessibility. By reframing cybersecurity not only as a technical field but also as a linguistic one, this research calls for a more intentional, equity-driven approach to translation that secures both data and learning outcomes.
Keywords: Cybersecurity education, Machine translation (MT), Multilingual learners, Universal Design for Learning (UDL), Artificial intelligence (AI) in translation, Custom ChatGPT, Higher education policy, Foreign Language Classroom Anxiety (FLCA)
Mapping the Gap: Analysis of Nuclear Cybersecurity Education in U.S. Universities
- Myles Nelson, Amorita Christian, Tiffany Fuhrmann, Charles Nickerson
- Session 04 - November 13th @ 11:00 AM
The U.S. nuclear sector is undergoing rapid transformation, driven by the expansion of advanced reactors, digital modernization of legacy systems, and increasing interest in nuclear energy to meet AI-fueled energy demands. However, the cybersecurity talent pipeline is not keeping pace with this growth. This paper investigates the significant gap in nuclear cybersecurity education and proposes scalable strategies for colleges to address this critical need by promoting it as a viable and essential career path.
Through a multi-institutional landscape analysis of 16 cybersecurity and 12 nuclear engineering programs, we found that nuclear cybersecurity is largely absent from university curricula. Most students are unaware of the field’s existence, and few institutions offer hands-on training or interdisciplinary exposure. This lack of awareness leads to a shortage of specialized talent, forcing nuclear facilities to retrain generalist hires or rely on costly external consultants.
We present a framework for early pipeline cultivation grounded in Social Cognitive Career Theory and workforce development principles. Proposed solutions include student-led clubs, guest lectures, modular classroom kits, and summer boot camps. By increasing visibility and access to nuclear cyber content, we aim to break the self-reinforcing cycle of low awareness and limited specialization. This work underscores the critical role of education and advocacy in cultivating early interest and guiding students toward this emerging field. We call on academic institutions, national laboratories, and industry stakeholders to collaborate in establishing nuclear cybersecurity as a distinct and accessible career path within the broader cybersecurity and nuclear engineering ecosystems.
Keywords: nuclear cybersecurity, workforce development, curriculum, OT/IT cybersecurity
Play NICE: Incorporating Cyber Phraseology into K-12 Education
- Timothy Crisp, John Hale
- Session 04 - November 13th @ 11:20 AM
Cyberattacks on critical infrastructures motivate a focus on cybersecurity awareness. A knowledge gap exists in the technical and non-technical understanding of cybersecurity, in the workforce. Closing this gap requires a multi-faceted approach -- of extreme importance is education. We use the NICE Workforce Framework TKS statements to develop a model of the most generalizable requirements needed to practice cybersecurity. We apply this model to increase cybersecurity language use and comprehension in all K-12 subjects, walking educators through a process incorporating cybersecurity into their lessons.
Keywords: Cybersecurity Education, K-12, Cybersecurity Literacy, NICE Workforce Framework
Teaching Critical Infrastructure Security Through Interactive Experiences: Modeling Cyberattacks in Gamified Learning
- Ella Luedeke, Meera Sridhar, Harini Ramaprasad
- Session 04 - November 13th @ 11:40 AM
This work introduces InfraLearn, a gamified learning platform designed to teach non-computer science students foundational background in cybersecurity for critical infrastructure. InfraLearn simulates attacks on a Distributed Energy Resource (DER) device, modeled after the Enphase Gateway solar monitor and implemented using a Flask-based API. Three prototype scenarios are developed: API spoofing, unauthorized remote shutdowns, and Living-off-the-Land (LoTL) downgrade exploitation. These scenarios are derived from real-world vulnerabilities in DER systems and integrated into a narrative-driven, web-based platform. Students interact with pre-configured virtual machines, guided code templates, and checkpoint quizzes, with optional AI support that reinforces comprehension while minimizing the need for prior programming experience. By situating cybersecurity concepts within the context of energy systems, InfraLearn makes abstract threats tangible and emphasizes the ethical application of defensive skills. This work demonstrates a scalable approach to engaging future engineers in securing critical infrastructure.
Keywords: Critical Infrastructure, Cybersecurity, Gamified Learning, Interactive
A Systematic Review of Residual Risk in Cybersecurity Awareness Training
- Venkat Laxmi Sateesh Nutulapati
- Session 05 - November 13th @ 2:00 PM
Cybersecurity awareness training is central to education and practice, yet persistent human error continues to expose organizations to breaches. AI-enabled attacks such as deepfakes, voice-cloned vishing, and automated spear phishing make these vulnerabilities even more consequential. This systematic review synthesizes 26 studies (2008–2025) using varied designs and training formats, from gamified learning and face-to-face sessions to e-learning, nudges, and simulated phishing. We introduced a residual-risk framework to capture outcomes that traditional effectiveness measures overlook. Residual Insecure Behavior (RIB) reflects the percentage of participants who continued risky practices after training, while Residual Knowledge Gap (RKG) indicates knowledge deficits that persisted. Across studies, improvements were common, but residual risks remained significant with phishing susceptibility often exceeding 10%, and knowledge gaps frequently surpassing 30%. Gamified approaches showed stronger behavioral effects, while conventional methods often raised awareness but left large gaps. For educators, these findings underscore that statistical gains can mask enduring weaknesses. By teaching and applying RIB and RKG, instructors can help students, practitioners, and organizations focus not just on learning outcomes, but on reducing real-world exposure in an AI-driven threat landscape.
Keywords: cybersecurity awareness, residual risk, insecure behavior, training effectiveness
CyberGLA: Protection Against Advanced AI-Powered Phishing Threats
- Weihao Qu, Gurmeet Singh, Daniel Crawford, Bingjun Li, Jalen Smith
- Session 05 - November 13th @ 2:20 PM
The rapid development of artificial intelligence, including agents and deepfake techniques, has accelerated phishing attacks and lowered the threshold for attackers. Modern phishing attacks now blend multiple tactics, including social engineering, URL spoofing, and AI deepfakes enabling adversaries to craft highly convincing messages that exploit human vulnerabilities and bypass traditional detection systems.
At the same time, current security awareness education struggles to keep up with the speed, sophistication, and complexity of these evolving threats.
To address this challenge, we propose a two-stage anti-phishing framework, CyberGLA, that combines technical defense and user-centered security education. In the Detection stage, we introduce EmailKnight, a spoof detection tool that performs multi-level email analysis.
Keywords: phishing detection, cybersecurity training, large language models, email spoofing, deepfake attacks, security awareness, human factors
Detecting and Mitigating AI Prompt Injection Attacks in Large Language Models (LLMs)
- Abel Ureste, Hyungbae Park, Tamirat Abegaz
- Session 05 - November 13th @ 2:40 PM
AI is being interconnected with vital systems at an exponential rate, being described as the greatest shift in technology since the invention of the Internet. However, with the emergence of AI also involves the introduction of new critical vulnerabilities in the technology sector. This research will discuss the types of prompt injection attacks that AI can be subjected to, what they target and the possible repercussions of prompt injections. To counteract these attacks, solutions to detect different types of prompt injection will also be discussed, giving solutions to mitigate attacks that can expose critical data. Along with the solutions, different trade-offs between these solutions will be given. This research aims to expose the security issues that arise with the rapid implementation of experimental AI involving prompt injection and how to prevent it.
Keywords: artificial intelligence, AI, prompt injection, prompt mitigation, Ollama
Development and Validation of a Healthcare Workers Phishing Risk Exposure (HWPRE) Taxonomy for Mobile Email
- Christopher Collins, Yair Levy, Gregory Simco, Ling Wang
- Session 05 - November 13th @ 3:00 PM
Email on mobile has become a dominant communication channel for healthcare professionals, yet its constrained interface and context of use amplify vulnerability to social engineering attacks, especially phishing. This paper reports the development and empirical validation of the Healthcare Workers Phishing Risk Exposure (HWPRE) taxonomy, a 2×2 framework that positions individuals by (i) general email-phishing susceptibility; and (ii) ability to detect mobile-specific phishing cues. We followed a sequential three-phase design: (1) a Delphi study with cybersecurity subject matter experts to validate mobile-relevant phishing indicators and components of a susceptibility index; (2) a pilot to refine instruments and procedures; as well as (3) a large-scale study (N=300 healthcare workers) using scenario-based assessments on smartphone-generated email stimuli. We present the construction of the Healthcare Workers Email Phishing Susceptibility Index (HWEPSI), reliability/validity evidence, and statistical analyses relating HWPRE placement to role, experience, medical departments, prior training, and demographic indicators. The results show significant heterogeneity across departments and experience bands; in addition, the ability to recognize mobile cues does not follow uniformly with general susceptibility. We discuss implications for targeted Security Education, Training, and Awareness (SETA) programs and measurement-driven program evaluation. We conclude with practical guidance for integrating HWPRE into organizational phishing defense and directions for future research.
Keywords: Phishing, Social engineering, Healthcare cybersecurity, Mobile device cybersecurity, Human factors in cybersecurity, SETA in healthcare
Teaching Endpoint Protection through Wazuh: A Project-Based Approach to Cybersecurity Education
- Sara Sutton, Victor Kipchirchir Bunge, Xinli Wang, Johnfia Frank, Esther Djan
- Session 05 - November 13th @ 3:20 PM
In recent years, the demand for practical, real-world cybersecurity education has grown dramatically. Traditional lecture- based methods often fall short in equipping students with the applied skills needed to detect, analyze, and respond to current cyber threats. This paper presents a project-based educational framework focused on the deployment, configuration, and use of real-world software such as Wazuh. Rather than following predetermined steps, students engage with realistic endpoint and network security scenarios, such as installing and configuring Wazuh agents, monitoring and interpreting live system and application logs, detecting simulated security incidents such as brute-force attacks and malware execution, and applying industry-aligned procedures. Evaluation of student performance demonstrates substantial improvements in alert interpretation, rule configuration, and application of cybersecurity knowledge. Our findings indicate that integrating Wazuh into coursework effectively develops both practical technical skills and analytical thinking, aligns with national workforce competency standards, and provides a model that other courses can adopt to integrate enterprise security tools into the classroom.
Keywords: Cybersecurity Education, Case Studies, Wazuh, Experiential Learning, Educational Framework
AI and ML Attacks on IC Hardware Security: Demonstration for Cybersecurity Students
- Danai Chasaki
- Session 06 - November 13th @ 4:00 PM
The IC design and security industry depends on trusted systems, yet remains challenged by an increasingly fragmented supply chain and evolving threat landscape. The rise of fabless enterprises and the proliferation of AI/ML technologies have further exposed hardware security to new vulnerabilities. This paper provides proof-of-concept implementations of emerging threats posed by machine learning to IC hardware design, focusing on two distinct areas: GNN-based attacks on logic locking and the insertion of hardware Trojans via large language models. These represent growing and independent research directions in hardware security. We showcase and analyze two representative examples from each category to highlight the risks of unmitigated ML-driven attacks.
Keywords: Machine Learning, Logic Locking, HardwareTrojans, Large Language Models, Hardware Security, Integrated Circuit Design
Cybercamp: An Experience Report on the Transformations of an Intensive Cybersecurity Summer Camp for High School Students
- Jose R. Ortiz Ubarri, Rio Piedras, Rafael A. Arce Nazario, Rio Piedras, Kariluz Dávila Diaz
- Session 06 - November 13th @ 4:20 PM
The Cybercamp is a Cybersecurity summer camp for high school students that has been held for the last nine years at a Hispanic Serving Institution. Since its inception in 2016 the Cybercamp has undergone several transformations in response to budget reductions and the COVID pandemic, to finally become its current version: a rich, hands-on learning experience that we believe is easily replicable even in resourcechallenged environments.
In this paper, we document the transformations of the Cybercamp and discuss the developed curriculum and materials in hopes that others will reuse, adapt, and improve upon them. In the Cybercamp, we apply active learning practices that have been shown to be effective in STEM education. We perform handson, activities, providing Capture The Flag (CTF) style practice exercises with automatic grading. The Cybercamp is assisted by college students who serve as peer-assisted leaders. The educational materials were designed with culturally relevant case studies, using open source technologies, and following Universal Design Learning best practices to make them as accessible as possible, particularly to low-income students. We discuss how we prevented students from falling behind and successfully completed the Cybercamp.
Keywords: Computer, Science, Education, Cybersecurity, active-learning, hands-on, ctf-style-exercises, universal-design-learning
Examining the Capabilities of GhidraMCP and Claude LLM for Reverse Engineering Malware
- Joshua Cole Watson, Bryson R. Payne, Denise McWilliams, Mingyuan Yan
- Session 06 - November 13th @ 4:40 PM
Can Large Language Models (LLMs) enhance the efficiency of reverse engineering by assisting malware analysts in the de-obfuscation of ransomware and other forms of malicious software? This research explores the integration of LLMs into reverse engineering workflows through the use of GhidraMCP, a plugin designed for the Ghidra open-source software reverse engineering suite. GhidraMCP leverages the capabilities of Claude’s Sonnet 4 model (as well as other LLMs) to rename decompiled variables and functions, generate descriptive annotations for disassembled code, and highlight potentially relevant strings or routines. These features are intended to reduce the cognitive load on analysts and accelerate the identification of critical components such as encryption routines, embedded URLs, command-and-control (C2) indicators, and external library calls within malware samples.
This study compares traditional reverse engineering workflows with LLM-augmented workflows using GhidraMCP. Multiple pseudo-ransomware samples were analyzed to assess differences in discovery efficiency, accuracy of function labeling, and qualitative analytical quality. Although no formal timing metrics were recorded, the research team determined that the LLM-augmented process consistently achieved insights more quickly and with fewer manual steps. In several instances, Claude Sonnet 4 successfully identified static relationships and artifacts that human analysts initially overlooked, demonstrating its potential to enhance traditional workflows through contextual inference and advanced pattern recognition.
The combination of GhidraMCP and Claude Sonnet 4 effectively leveraged static analysis to identify the hidden flags for ESCALATE challenges one through seven. However, while the research team was ultimately able to solve all challenges, several required dynamic analysis and binary patching—tasks that the current LLM-augmented setup could not perform due to the lack of patching capabilities within GhidraMCP. It remains unclear whether this limitation stems from Ghidra, the plugin, or the integration framework itself. During testing, Claude Sonnet 4 occasionally exhibited hallucinations, producing inaccurate or speculative annotations that required human correction and additional prompting, particularly during challenges three and four. These occurrences emphasize the ongoing need for human oversight and iterative validation when employing generative AI in critical cybersecurity tasks.
Despite these limitations, the findings indicate that LLM-augmented reverse engineering can meaningfully improve analytical comprehension, efficiency, and context awareness. Claude Sonnet 4’s linguistic reasoning and ability to infer code intent proved especially valuable for de-obfuscating complex binaries. Future work will focus on enabling dynamic capabilities within GhidraMCP to support patching and execution-based testing, as well as refining prompt strategies and hallucination detection. This research establishes a foundation for the continued development of intelligent, LLM-assisted tooling designed to augment human expertise in malware analysis and reverse engineering.
Keywords: LLMs, Malware Analysis, GhidraMCP, Reverse Engineering
Integrating Vulnerability Assessments with Security Control Compliance
- Ernesto Ortiz, Aurora Duskin, Noah Hassett, Clemente Izurieta, Ann Marie Reinhold
- Session 06 - November 13th @ 5:00 PM
Information technology providers must implement security controls to protect client and partner data, as well as comply with government security requirements. The preparation of security compliance documentation is a slow process due to the manual efforts involved. We present SSP Manager, a framework that streamlines compliance and supports the building and maintenance of System Security Plans. The tool integrates vulnerability assessment, program analysis, and control monitoring by i) implementing a security control prioritization strategy that outputs NIST SP 800-53 controls to mitigate MITRE ATT&CK techniques, ii) by incorporating reachability analysis of Python dependencies to filter out false positives from vulnerability scan results, and iii) by providing compliance monitoring of functionality based on Chef’s InSpec testing framework and the Open Policy Agent policy engine. Test reports are generated in a machine-readable format for easy integration into automated compliance pipelines. Our work bridges the gap between vulnerability assessment and security compliance. Moreover, it reduces the manual overhead in security workflows.
Keywords: NIST SP 800-53, MITRE ATT&CK, System Security Plan, Vulnerability assessment, Dependency scanning, Policy-as-Code, Compliance-as-Code, Open Policy Agent, Chef InSpec, Security control prioritization, Security compliance testing
Cybersecurity Education with Generative AI: Creating Interactive Labs from Microelectronic Fundamentals to IoT Security Exploitation
- Kushal Badal, Xiaohong Yuan, Huirong Fu, Darrin Hanna, Jason Gorski
- Session 07 - November 13th @ 4:00 PM
Creating engaging cybersecurity education materials typically requires months of development time and specialized expertise. This paper describes how we used generative AI to address this challenge. We utilized Claude AI to generate a complete interactive platform that teaches students basic microelectronics through IoT hacking. Through iterative prompting, we generated more than 15,000 lines of functional code, including interactive visualizations, Python security tools, and gamified quizzes with real-time leaderboards. The curriculum guides students through the evolution of computing—from vacuum tubes to modern IoT devices—then helps them apply this foundation to discover real vulnerabilities. We implemented this platform at a GenCyber summer camp with 40 participants, where students identified actual security issues in AmpliPi audio systems—open-source network audio devices designed for multi-room audio distribution—including password weaknesses and denial of service flaws. The entire development process took only three weeks instead of the typical several months. The AI produced quality educational content, although we reviewed everything for technical accuracy and ethical considerations. During the camp, students remained engaged through competitive elements and hands-on labs, learning both theoretical concepts and practical skills. The students used AI-generated tools, including working implementations of SlowLoris and dictionary attacks, to test real systems. Our experience demonstrates that generative AI can efficiently create effective cybersecurity education materials that remain technically current. All materials are publicly available on GitHub for educational use. This approach could help educators stay on track with the rapidly evolving technology despite traditional curriculum development constraints.
Keywords: Generative AI, Cybersecurity Education, Interactive Learning, IoT Security, Hands-On Labs, Curriculum Development
Distributed Agency in AI-Enhanced Cybersecurity Education: A Posthuman Instructional Design Framework
- Ryan Straight, Josh Herron
- Session 07 - November 13th @ 4:20 PM
This paper addresses a critical challenge facing cybersecurity educators: preparing students for AI-enhanced practice environments where effective action emerges from human-AI collaboration rather than individual expertise. Traditional instructional design frameworks assume human-centered learning processes that inadequately address distributed agency realities in contemporary cybersecurity operations. Drawing on Adams and Thompson’s posthuman inquiry methodology, this analysis develops a comprehensive pedagogical framework consisting of four principles: (1) Design for the Assemblage, Not the Individual, (2) Cultivate Relationality and Response-ability, (3) Embrace Emergence, Messiness, and Indeterminacy, and (4) Posthuman Assessment Approaches. The framework provides concrete instructional design implications, including strategies for configuring human-AI learning relations, integrating AI literacies across cognitive, civic, creative, and critical dimensions, and developing assemblage-aware cybersecurity case studies. These design implications bridge theoretical posthuman concepts with practical curriculum implementation through the lens of curriculum-as-lived rather than curriculum-as-plan. Preliminary implementation observations from an undergraduate cybersecurity ethics course demonstrate how posthuman-designed scenarios enable students to develop comfort with complexity and distributed analysis. Student reflections reveal progression from seeking singular solutions to embracing multiple valid perspectives, suggesting effective cultivation of human-AI collaborative competencies. The framework equips cybersecurity educators with both theoretical foundations and actionable design strategies for preparing students for distributed agency practice environments.
Keywords: distributed agency, artificial intelligence, cybersecurity education, posthumanism, human-AI collaboration, instructional design
Integrating Artificial Intelligence into Undergraduate Cybersecurity Education: A Course Design for Threat Detection, Explainability, and Ethical Resilience
- Vahid Heydari, Kofi Nyarko
- Session 07 - November 13th @ 4:40 PM
This paper introduces an undergraduate course, Artificial Intelligence Applications in Cybersecurity, designed to equip students with Artificial Intelligence (AI) and Machine Learning (ML) skills to address modern cyber threats. The curriculum integrates supervised and unsupervised learning, deep learning, explainable AI (XAI), adversarial ML, and ethical considerations. Using accessible tools (Python, Google Colab) and real-world datasets (e.g., NSL-KDD, CICIDS2017, malware corpora), students complete phased projects progressing from classical ML baselines to deep learning with interpretability (SHAP/LIME) and robustness against adversarial attacks (FGSM/PGD with mitigation). The course aligns with data science and cybersecurity workforce frameworks, emphasizing reproducibility, communication, and responsible AI practices.
Keywords: Artificial Intelligence (AI), Cybersecurity Education, Machine Learning (ML), Explainable AI (XAI), Adversarial Machine Learning, Ethics and Fairness, Undergraduate Curriculum Design, Workforce Development
Toward Experiential Training Program for AI Security and Privacy Practitioners
- Mohammed Abuhamad, Mujtaba Nazari, Loretta Stalans, Eric Chan-Tin
- Session 07 - November 13th @ 5:00 PM
The rapid adoption of artificial intelligence across industries has outpaced security and privacy training for AI practitioners. This paper presents methods, modules, and findings from an experiential training program designed to address security and privacy challenges in AI systems development and deployment. We conducted two program iterations: a comprehensive 12-workshop series (May-October 2024) and a condensed 6-workshop format (January-February 2025). The program combined expert-led panel sessions with hands-on laboratory activities, engaging 78 participants from diverse professional backgrounds. Evaluation through pre-and post-evaluation surveys and qualitative observations revealed improvements in cybersecurity knowledge and AI security awareness.Participants demonstrated enhanced ability to identify vulnerabilities, implement security measures, and develop organizational policies for AI-related risk mitigation. The condensed format showed comparable learning outcomes with improved completion rates. This effort highlights the increased need to establish cybersecurity and privacy training for AI professionals to develop secure and trustworthy AI systems.
Keywords: AI security, cybersecurity training, privacy-preserving AI, experiential learning, professional development, adversarial machine learning
An AI Agent Workflow for Generating Contextual Cybersecurity Hints
- Hsiao-An Wang, Joshua Goldberg, Audrey Fruean, Zixuan Zou, Ruoyu Zhao, Sakib Miazi, Jens Mache, Ishan Abraham, Taylor Wolff, Jack Cook, Richard Weiss
- Session 07 - November 13th @ 5:20 PM
Large Language Models (LLMs) have proven beneficial in aiding student learning across a multitude of domains such as computer science, data science, and mathematics. While these chatbots show promise, they can be impractical to deploy in situations where quality student data is not available.
Hint generation for cybersecurity can be feasible with the technology we have today if we make the right simplifications. First, we need human-in-the-loop systems because the training data used to train general LLMs may not cover cybersecurity well. Second, using modular agents allows us to access dynamic data that is outside the training dataset and add specificity to the hints.
In this paper, we leverage n8n, an agent deployment service, to establish the connections between our agents and Discord, the messaging system that our classroom uses to offer students a streamlined learning experience when working on interactive cybersecurity exercises. We have tested this in the classroom and have shown that it is feasible.
Keywords: Cybersecurity, Cybersecurity Education, Generative AI, Retrieval Augmented Generation
Interactive Cybersecurity Lab: Hands-on Cybersecurity Training
- Chrstian Soucy, Danai Chasaki
- Session 08 - November 13th @ 4:00 PM
Cybersecurity is a dynamic and essential discipline focused on protecting data and systems from malicious threats. Its scope spans personal devices, industrial systems, medical technologies, financial data, and Personally Identifiable Information (PII). As its integration into society and industry deepens, emerging vulnerabilities demand a skilled workforce capable of securing critical infrastructure. This proposed training introduces foundational cybersecurity concepts through a structured series of puzzles and challenges, organized into progressive stages.
Keywords: cybersecurity, training, lab, puzzle, CTF, UI, UX, interactive, skill, development, framework, exercises, challenges, education
Security and Privacy of Wearable and Implantable Medical Devices: A Health Informatics Course on Security and Privacy of Wearable and Implantable Medical Devices
- Michelle Mical Ramim
- Session 08 - November 13th @ 4:20 PM
As wearable and implantable medical devices become fundamental to remote patient monitoring and precision medicine, the associated security and privacy risks demand urgent attention. These devices are increasingly targeted by cybercriminals, potentially endangering patient safety and data integrity. Specifically, it has been documented that vulnerabilities in medical devices have been exploited to alter device behavior or interfere with clinical treatment delivery. Despite these known vulnerabilities, wearable and implantable medical devices have become integral to modern patient care, offering innovative ways to monitor, manage, and even remotely treat various health conditions. These devices are essential to remote patient monitoring and precision medicine; the real-time data they capture is increasingly integrated into electronic health records (EHRs) to support clinical decision-making and enhance workflow efficiency. At the same time, most medical and healthcare students around the nation are not well educated to deal with cybersecurity issues. Subsequently, medical and healthcare students should understand the vulnerabilities associated with wearable and implantable devices, the risks they pose, and the importance of regulatory compliance, including the Health Insurance Portability and Accountability Act (HIPAA). To address this, we developed an experiential learning course titled Security and Privacy of Wearable and Implantable Medical Devices, designed for advanced undergraduate and graduate students in health and medical fields. The course immerses students in real-world challenges through lectures, labs, and project-based learning, leveraging wearable devices such as FitBitTM to analyze and interpret real-time personal health data. The curriculum covers critical topics including data security, privacy, HIPAA compliance, data visualization, interoperability, and real-world cyberattack case studies. The learning objectives align with the Commission on Accreditation for Health Informatics & Information Management Education (CAHIIM) standards and Miller’s Pyramid of Clinical Competence to ensure industry-relevant competencies and progressive skill development. Interactive lectures were designed to promote engagement and featured expert guest speakers from health information technology (IT) and cybersecurity sectors. Case-based discussions encouraged students to consider the implications of cyberattacks on patient safety and health outcomes. The lab component offered a structured environment for technical practice, such as configuring wearable devices, extracting and visualizing data, and evaluating the security of data transmission. Lab assignments played a central role in reinforcing the key concepts introduced in lectures and assigned readings. By combining didactic instruction with applied learning and real-world examples, the class components provided a robust experiential learning that mirrors current challenges faced by healthcare.
Validating the fundamental cybersecurity competency index (FCCI) through expert evaluation for human-generative artificial intelligence (GenAI) teaming
- Witko, Yair Levy, Catherine Neubauer, Gregory Simco, Laurie Dringus, Melissa Carlton
- Session 08 - November 13th @ 4:40 PM
The increasing volume of cyber threats, combined with a critical shortage of skilled professionals and rising burnout among practitioners, highlights the urgent need for innovative solutions in cybersecurity operations. Generative Artificial Intelligence (GenAI) offers promising potential to augment human analysts in cybersecurity, but its integration requires rigorous validation of the fundamental competencies that enable effective collaboration of human-GenAI teams. Fundamental cybersecurity competencies, encompassing essential cybersecurity Knowledge, Skills, and Tasks completion (KSTs). Competency is defined as the ability to complete tasks within a work role. In this research study, we employed a mixed-methods research approach designed to evaluate human-GenAI teams, emphasizing the role of expert consensus in shaping the experimental assessment of the Fundamental Cybersecurity Competency Index (FCCI) in a commercial cyber range. Selecting a commercial cyber range allowed us to identify the specific KSTs from the United States (U.S.) Department of Defense (DoD) Cyber Workforce Framework (DCWF) and measure them at the KSTs level. The specific commercial cyber range we assessed enables the extraction of the users’ performance at the KST level. To validate the proposed experimental assessment of the FCCI and confirm the relevance of the selected cybersecurity KSTs, a panel of 20 Subject Matter Experts (SMEs) was engaged to evaluate and validate the proposed competency measures. The expert panel refined the cybersecurity scenarios and experimental procedures used in the commercial cyber range hands-on scenarios, ensuring alignment with DCWF. Our findings indicated that 46 of 47 fundamental cybersecurity KSTs were validated by the SMEs as essential components of the FCCI. Consensus levels of 85–90% confirmed strong expert support for incorporating GenAI (e.g., large language models such as ChatGPT) as a teammate or decision-support agent in these controlled experiments. The validated scenarios and experiments pave the way for future research on assessing cybersecurity competencies in commercial cyber range platforms with and without GenAI support (e.g., large language models such as ChatGPT). By establishing the baseline for competency assessment in this research, the SMEs’ feedback contributed to advancing cybersecurity workforce development and provided critical insights for integrating GenAI into collaborative cybersecurity human-GenAI teaming operations. The validated FCCI provides a robust mechanism to evaluate both human and human–GenAI team performance within realistic cybersecurity scenarios, while providing the needed metrics to measure cybersecurity competencies quantitatively. While this study achieved strong consensus, like any other research, several limitations were observed, including a relatively small SME panel size (n=20) and the absence of empirical testing with users. Future research will employ hands-on cyber range experiments to measure the FCCI by comparing KSTs measured across human-only and human–GenAI teams. Ultimately, this research advances cybersecurity workforce development by establishing a validated foundation for a quantitative assessment of cybersecurity competencies based on DCWF necessary for effective collaboration between humans and GenAI in defending against complex and evolving cyber threats.
Keywords: GenAI, Fundamental cybersecurity competencies, Human-GenAI teaming, Cybersecurity Knowledge, Skills, and Tasks (KSTs), Technologically savvy (T-SAVVY) soldiers
Using AI in Leading Research
- November 12th
- 10:20 PM
Artificial intelligence is rapidly reshaping the research landscape, offering scholars new tools for efficiency, synthesis, and innovation. Yet its rise also sparks concern among both students and higher education leaders about accuracy, integrity, and over-reliance. This interactive workshop explores how AI can be leveraged as a research partner, not a replacement, drawing on recent studies of undergraduate use of AI in writing and research as well as leadership perspectives on its opportunities and challenges. Participants will examine practical applications of AI across the research lifecycle—literature reviews, data analysis, and writing refinement—while engaging in discussion about ethics, guardrails, and transparency. Through live demonstrations and collaborative activities, attendees will leave with strategies, evidence-based insights, and a framework for responsibly integrating AI into their scholarly practice. By reframing AI as a tool rather than a threat, this session empowers researchers to harness its potential while preserving rigor and academic integrity.
Threat Model on Google ADK Agents
- Clark Jason Ngo, Sam Chung
- November 12th
- 3:00 PM
Agentic AI based on LLMs and generative AI marks a new frontier in autonomous capabilities but also exposes new security challenges. Google ADK, an open-source framework, facilitates the creation and deployment of intelligent agents but simultaneously expands the attack surface. This paper introduces a threat model tailored for ADK agents, using two foundational examples, and aligns these risks with the taxonomy of the OWASP ASI. Identified threats include memory poisoning, tool misuse, privilege compromise, intent manipulation, cascading hallucinations, and remote code execution risks. We propose corresponding mitigation strategies to strengthen ADK-based AI agents and ensure secure deployment of agentic systems.
Automating Bug Discovery in APKs with AI
- November 13th
- 1:00 PM
In this talk, we'll present the development of apk-ai-dive, an AI agent designed to automatically analyze APK files for potential security vulnerabilities.
The talk is divided into four key sections.
- Why we need an AI Agent: We'll start by looking at the limitations of manual security reviews and explain why an AI-driven solution is a necessary step forward.
- AI Agent Fundamentals: Get an overview of the core concepts of AI agent architecture and the fundamental steps you need to take to develop your own.
- The apk-ai-dive Story: We will cover the complete development lifecycle of our agent, sharing the specific technical challenges we encountered and how we solved them.
- Making it More Useful: The talk will conclude with a look at how we enhanced the agent's functionality and user experience by building an interactive UI.
Teaching Broader Resilience in Cybersecurity
- November 14th
- 11:10 AM
Cybersecurity education often begins with a defensive mindset: preventing breaches, patching systems, and hardening networks. While essential, this perspective risks overlooking a critical component of modern cyber strategy — broader resilience. Resilience emphasizes not just preventing incidents, but also sustaining operations, recovering quickly, and adapting to evolving threats.
This roundtable invites educators to share and explore strategies for introducing broader resilience concepts to first- and second-year cybersecurity students. Through guided discussion and collaborative scenario work, participants will examine how resilience spans technical, organizational, human, and societal dimensions. Educators will discuss how to integrate resilience into existing curricula, making abstract concepts concrete for early learners through case studies, simulations, and role-based exercises. Attendees will leave with actionable teaching activities and a framework for helping students appreciate cybersecurity not only as defense, but as a holistic approach to continuity, adaptability, and recovery.
Goals and Learning Outcomes
- Define broader resilience in terms students can grasp early in their studies.
- Explore classroom strategies (labs, case studies, role-play) that highlight resilience across technical, organizational, and societal layers.
- Identify ways to connect resilience to students' lived experiences (e.g., cloud outages, social media hacks, campus Wi-Fi downtime).
- Provide instructors with ready-to-adapt prompts and activities.
Guiding Prompts for Discussion
- How do you explain the difference between "cybersecurity defense" and "cyber resilience" to new students?
- What real-world examples (e.g., ransomware in hospitals, cloud service outages) could be used to illustrate resilience concepts?
- How can first-year students engage with resilience ideas without advanced technical skills?
- In what ways do organizational policies and human factors support or hinder resilience?
- How can we help students see resilience as a career-wide mindset, not just a technical skill?
Mini-Activity
Participants will work in small groups as instructors designing a resilience lesson. Each group receives a scenario (e.g., university hit by ransomware, power grid disrupted, major cloud provider outage). They will:
- Define the resilience challenges.
- Identify how a first- or second-year class could explore the scenario (non-technical and technical activities).
- Share one teaching strategy (case study, lab, discussion prompt, or reflection exercise) with the roundtable.
Takeaways for Educators
- A simple framework for explaining resilience to early learners.
- Example prompts and scenarios adaptable to classroom use.
- Strategies for connecting resilience to both technical and non-technical aspects of cybersecurity.
Ginger Armbruster
As the City of Seattle's Chief Privacy Officer, Ginger is Seattle Information Technology's Director of the Data Privacy, Accountability, and Compliance division, with responsibility for five citywide programs including the Privacy & Surveillance Compliance Program, the Citywide Public Records Act Program, the Open Data Program, the Compliance & Policy Program, and Responsible Artificial Intelligence Program. Prior to this role, she worked for Microsoft on an international team of privacy specialists to resolve issues associated with multi-million-dollar marketing initiatives. She spent the first 20 years of her career working in sales and marketing for Fortune 500 companies such as IBM, Hewlett-Packard and Johnson & Johnson, and several medical technology startup companies.
Jane Blanken-Webb
Jane Blanken-Webb, Ph.D., is an Associate Professor in the Doctor of Education (Ed.D.) in Educational Leadership program at Wilkes University. A specialist in John Dewey’s philosophy, she bridges her expertise in philosophy of education with a focus on cybersecurity. Jane developed a doctoral-level course, Cybersecurity for Educational Leaders, which has inspired several interdisciplinary dissertation projects exploring the critical intersection of education and cybersecurity. Her work, supported by cybersecurity experts, contributes to preparing leaders for the digital age. Prior to Wilkes, she held postdoctoral positions, including at the Information Trust Institute, University of Illinois at Urbana-Champaign.
Michael Bottorff
Mike Bottorff has more than 15 years of education and information technology (IT) experience, primarily leading technology teams in public school corporations. Mr. Bottorff currently serves as Vice President of the School of Information Technology at Ivy Tech Community College. He previously held leadership roles for Earlham College, MSD of Lawrence Township, NHJ United School Corporation, and Hamilton Southeastern Schools. An Indiana native, Mr. Bottorff earned his Bachelor of Arts in English Secondary Teaching with a computer endorsement from Purdue University and a Master of Business Administration with an emphasis in finance in 2007 from the University of Washington.
William Butler
Dr. William (Bill) Butler is the Vice President of Cyber Science Outreach and Partnerships at Capitol Technology University. Beginning in 2021, he served as the Vice President of Academic Affairs and previously, as Cybersecurity Chair for 8 years at Capitol Tech. Earlier in his career, he worked in the networking and IT industries as a network engineer and consultant for over 20 years. Dr. Butler also served as a joint qualified communications information systems officer in the U.S. Marine Corps and retired as a Colonel with 30 years of service (active and reserve). He is very active in various working groups such as the National Institute of Standards and Technology Cloud Computing Security Forum Working Group (NIST CCSFWG), Cloud Security Alliance (CSA) Big Data and Mobile Computing Working Group, and the National CyberWatch Center Curriculum Taskforce and the National Cybersecurity Student Association Advisory Board. Dr. Butler holds degrees from Brenau University, Marine Corps University, U.S. Army War College, National Defense University, University of Maryland and Capitol Technology University. He earned his DSc in Cybersecurity at Capitol in 2016 researching consumer countermeasures to illegal cellphone intercept.
Brian Callahan
Dr. Brian Callahan is a Specialist Professor in the Department of Computer Science and Software Engineering at Monmouth University. He is the Director of the Cybersecurity Research Center at Monmouth, where his research interests include business and social cases for cybersecurity, the intersection of Generative AI and cybersecurity, the intersection of Quantum computing and cybersecurity, and improving security knowledge for everyday people. He teaches a variety of cybersecurity courses ranging from red teaming to cloud security, and has coached nationally top-ranked CTF teams. He can be found online at https://briancallahan.net.
Eric Chan-Tin
Dr. Eric Chan-Tin is currently a Professor in the Department of Computer Science, the Founding Director of the Loyola Center for Cybersecurity, and the PI for the Center of Academic Excellence in Cyber Defense at Loyola University Chicago. He received his Ph.D. degree in Computer Science from the University of Minnesota and his B.A. from Macalester College. His research areas are in network security, distributed systems, privacy, anonymity, and at the intersection of cybersecurity and social sciences. He has received over $6 million in external funding for his research. He has published over 50 peer-reviewed papers, including publications at top conferences and journals such as ACM CCS, NDSS, ACM TISSEC, and IEEE TIFS. In 2020, he was recognized as a Master Researcher in the College of Arts and Sciences at Loyola University Chicago. He is the faculty advisor for the Loyola Women in Cybersecurity (WiCyS) student chapter, the Don't Panic! Computer Science club, and the cybersecurity competitions 7968 club; and coaches students to participate in cybersecurity competitions.
Quinn Colognato
Quinn Colognato is a sophomore at Rensselaer Polytechnic Institute majoring in Information Technology & Web Science and Computer Science with a focus track in Information Security. She serves on leadership for the Rensselaer Cybersecurity Collaboratory and the RPI Association for Computing Machinery Women's Chapter. She also conducts research in the Rensselaer Cybersecurity Collaboratory on the intersection between Quantum Computing and Generative AI and mentors her peers in computer science.
Zach Cossairt
I offer value with a growth mindset and strong ability to develop and apply solutions that improve decisions under risk and uncertainty. My professional intent is to apply behavioral insights with a solution-focused approach to influence the organizational change necessary to adopt innovative ways of managing risk.
Marc Dupuis
Marc J. Dupuis, Ph.D., is an Associate Professor within the Computing and Software Systems Division at the University of Washington Bothell where he also serves as the Graduate Program Coordinator. Dr. Dupuis earned a Ph.D. in Information Science at the University of Washington with an emphasis on cybersecurity. Prior to this, he earned an M.S. in Information Science and a Master of Public Administration (MPA) from the University of Washington, as well as an M.A. in Political Science at Western Washington University
His research area is cybersecurity with an emphasis on the human factors of cybersecurity. The primary focus of his research involves the examination of psychological traits and their relationship to the cybersecurity and privacy behavior of individuals. This has included an examination of antecedents and related behaviors, as well as usable security and privacy. His goal is to both understand behavior as it relates to cybersecurity and privacy, and discover what may be done to improve that behavior.
More recently, Dr. Dupuis and his collaborators have been exploring the use of fear appeals, shame, regret, forgiveness, and grace in cybersecurity, including issues related to their efficacy and the ethics of using such techniques to engender behavioral change. He has a strong track record of multi-disciplinary research and loves involving his students in his research. Security and privacy education and outreach are key components of his work.
Pamela Doran
Pamela Doran serves as the Digital Accessibility and Machine Translation Coordinator at Empire State University. She is pursuing a doctorate in Leadership and Change, focusing on the intersection of technology, language justice, and inclusive pedagogy. Her research and practice seek to transform higher education through equitable design, ethical AI use, and the integration of multilingual accessibility as a foundation for belonging and academic success.
Kendra Evans
Kendra Evans is an educator, researcher, and curriculum designer with more than a decade of experience spanning K–12 classrooms, higher education, and industry-aligned learning platforms. She has led computer science programs in higher education, guided curriculum development, and designed innovative cybersecurity courses that integrate hands-on labs, cyber ranges, and real-world applications. Her work with Codio has advanced interactive, practice-based learning experiences, while her academic leadership as a Program Chair in Computer Science has shaped pathways for students entering technology and security fields.
Evans has collaborated with the Cybersecurity and Infrastructure Security Agency (CISA) to design national K–12 cybersecurity curriculum, ensuring that complex security concepts are accessible and seamlessly integrated into everyday instruction. Her scholarship includes a recently published article in the Journal of The Colloquium for Information Systems Security Education (CISSE), where she examined the intersection of cybersecurity, pedagogy, and the workforce skills gap.
Drawing on over a decade of classroom teaching across multiple subjects and grade levels, including higher education, Evans brings a strong foundation in pedagogy and innovation to her work in cybersecurity education. She holds multiple degrees from Northwestern State University in Louisiana and is pursuing her doctorate at Arkansas State University, where her research explores equity, access, and skill development in cybersecurity pathways. She is also an active member of ISC², engaging with the global cybersecurity community to advance inclusive education and workforce development.
Her mission is to create equitable, inclusive pathways into cybersecurity, equipping educators and learners with the knowledge and skills to thrive in a rapidly evolving digital world.
Steven Furnell
Prof. Steven Furnell is Professor of Cyber Security in the School of Computer Science at the University of Nottingham. His research interests include security awareness and culture, usability of security and privacy, and technologies for user authentication. He has authored over 430 papers in refereed international journals and conference proceedings, as well as various books, book chapters, and industry reports. Amongst his various roles and responsibilities, Steve is the UK representative to Technical Committee 11 (security and privacy) within the International Federation for Information Processing, a board member of the Chartered Institute of Information Security, a member of the Steering Group for the Cyber Security Body of Knowledge (CyBOK), and the Deputy Editor of Computers & Security.
Zi Fan Tan
Zi Fan Tan is a Security Researcher at Google on the Android Red Team. He focuses on finding and exploiting vulnerabilities in the Android platform and its kernel. His work involves in-depth vulnerability research, where he looks for flaws like memory corruption or privilege escalation bugs that could harm users.
Zi Fan has a broad skill set in cybersecurity, from reverse engineering to developing custom exploits. His work is critical for proactively finding vulnerabilities before they can be exploited by malicious actors, which helps secure Android for billions of users.
Jemell Garris
My doctoral research focuses on cybersecurity culture in higher education and AI-driven multi-agent frameworks for secure governance.
I bring over 15 years of leadership experience spanning cybersecurity, technology, and higher education. I've led identity and access management modernization and directed cross-institutional cybersecurity programs across federal research environments and universities. I also serve in the Washington Army National Guard, leading technology operations and teams in high-stakes environments.
In academia, I teach database technologies, operating systems, AI fundamentals, and quality assurance—emphasizing hands-on, applied learning that connects theory to real-world practice. My work bridges technical expertise with human-centered leadership. I believe cybersecurity professionals must be not only technically skilled but also effective communicators and decision-makers who can inspire trust under pressure. Beyond teaching and research, I'm active in cybersecurity competitions, leadership development, and mentoring the next generation of technology leaders.
Emily Goldman
Emily Goldman is a junior dual major in Computer Science and Information Technology & Web Science. She participates in CTF competitions and cybersecurity research in the Rensselaer Cybersecurity Collaboratory. She has conducted research on the behaviors of ROPchains on OpenBSD systems, condensed information on vulnerabilities and mitigation suggestions for modern healthcare systems, and worked on RPI's new quantum computer to crack pseudo-RSA using Shor's algorithm. When she is not burying herself in the latest TryHackMe room, she will typically be knitting or playing Dungeons & Dragons.
Jason Gorski
Jason Gorski, Ph.D. is the President of MicroNova LLC, an embedded systems consulting firm he founded in 2012, where he specializes in FPGA-based embedded systems development across automotive, defense, aerospace, medical, and industrial sectors. Dr. Gorski holds a Ph.D. in Systems Engineering from Oakland University. He has successfully commercialized open-source hardware through MicroNova's AmpliPi whole-house audio system. His technical expertise in FPGA development, custom IP cores, DSP/ML optimization, and high-speed interfaces has enabled applications ranging from expeditionary 3D printer control and ultrasonic pipeline inspection to vision-based error correction and mini-UAV autonomous monitoring.
Vahid Heydari
Dr. Vahid Heydari is an Associate Professor of Computer Science at Morgan State University, specializing in the application of artificial intelligence in cybersecurity, moving target defense, malware analysis, and industrial control system security. He holds a Ph.D. in Computer Engineering and an M.S. in Cybersecurity from The University of Alabama in Huntsville. Prior to joining Morgan State, he was an Associate Professor and Director of the Center for Cybersecurity Education and Research at Rowan University.
Andrew Hurd
Dr. Hurd is responsible for instruction and curriculum development in the MSIT and MS in Cybersecurity programs. Prior to joining Empire University, he worked at SUNY Albany, SNHU, Excelsior University and Hudson Valley CC. Dr. Hurd holds dual Bachelors of Arts in Computer Science and Mathematics, a Masters in the Science of Teaching Mathematics, and a PhD in Information Sciences specialized in Information Assurance and Online Learning. He won the SUNY Chancellors award for Excellence in Teaching in 2012 while working at HVCC. Dr. Hurd research focus is on Cybersecurity and Computer Science education. He strives to find ways to teach cybersecurity and computer science more efficiently to the new generation of learners.
Amir Javed
Dr. Amir Javed is a distinguished cybersecurity professional, academic, and co-founder of Kesintel, a cybersecurity intelligence venture. He currently serves as a Senior Lecturer at Cardiff University, where he leads pioneering research and teaching in areas such as malware behaviour analysis, cyber risk management, and the application of artificial intelligence in cybersecurity.
Holding a PhD in Cybersecurity, focused on understanding malware behaviour and predicting attacks through online social networks, and an MSc in Information Security and Privacy, Dr Javed is also a CISSP-certified expert. His research and practical insights are underpinned by collaborations with leading organisations including GCHQ, Airbus, Thales, and The Alan Turing Institute, aimed at countering cyber threats and strengthening digital resilience.
Beyond academia, Dr Javed serves as a Domain Knowledge Expert at the Wales Cyber Innovation Hub and as a mentor at T-Hub, Telangana, where he supports start-ups and SMEs in developing innovative cybersecurity solutions. As co-founder of Kesintel, he plays a pivotal role in transforming academic research into practical intelligence tools that enable organisations to anticipate and mitigate emerging cyber risks. His research encompasses critical areas such as IoT security, phishing detection, and AI-driven threat identification, contributing substantially to advancements in the field. With over 16 years of experience, Dr Javed brings a distinctive blend of technical expertise and commercial acumen to cybersecurity. He remains deeply committed to advancing the discipline through education, innovation, and mentorship.
Ju-Yeon Jo
Ju-Yeon Jo received the Ph.D. degree in Computer Science from Case Western Reserve University, Cleveland, OH, USA, in 2003. She is currently a Professor with the Department of Computer Science, University of Nevada, Las Vegas (UNLV). At UNLV, she directs the UNLV Cyber Clinic and leads several major federally funded initiatives, including the NSF CyberCorps: Scholarship for Service program, the DOE/NNSA MSIPP project, and the NSA GenCyber programs. Her research interests include cybersecurity education, network and system security, applied cyber defense, and experiential learning models for cybersecurity workforce development.
Prof. Jo has authored numerous refereed publications and has served as principal investigator on multi-million-dollar projects supporting cybersecurity research, education, and outreach. She is the Point of Contact (POC) and lead for UNLV’s designation as a National Center of Academic Excellence in Cyber Defense (CAE-CD), building extensive partnerships with government, industry, and educational institutions to expand the cybersecurity talent pipeline.
She has served as a reviewer, program committee member and organizer for multiple conferences and journals in cybersecurity and computer science education.
Ralph Johnson
Ralph Johnson is a veteran in information security and privacy, helping shape the landscape across various high-profile roles over his 28-year career. Currently serving as the State Chief Information Security Officer for Washington, he directs the Office of Cybersecurity and advises the executive branch and the Legislature on information security issues. He plays a central role in guiding state-wide security efforts, providing leadership and strategic planning.
In past positions as Chief Information Security Officer at Nant Media LLC and Los Angelas County he was recognized for developing robust security policies and achieving the PCI DSS certification, evidence of his skill in enhancing organizational security postures. His impact during his time with Los Angeles County was profound, where he developed the county's first strategic information security plan. Ralph's leadership style balances business needs with security imperatives, creating initiatives that involve collaboration across diverse departments to foster a unified approach to security and privacy.
Ralph's professional journey also includes initiating the development of King County's information security and privacy program where he spearheaded numerous initiatives, including the deployment of enterprise-wide security measures and the development of comprehensive training programs for over 3,500 staff members. His proactive strategies significantly reduced risks and safeguarded county information assets.
Beyond his executive roles, Ralph has made significant contributions to the academic field as an instructor at the University of Washington and ITT Technical Institute, where he taught Information Systems and Cybersecurity. His commitment to education is also evident as a sponsor and patron of the Holistic Information Security Practitioner Institute, where he has served as a board president and certified trainer, inspiring the next generation of security professionals.
Ralph has earned degrees from Eastern Oregon University and the San Francisco College of Mortuary Science and has numerous professional certifications in IT and security. His contributions have been recognized with several awards, including the ISE West "People's Choice" Executive of the Year.
Yair Levy
- Author
- Organization
- Profile
- Network
Dr. Yair Levy is a Professor of Information Systems and Cybersecurity at Nova Southeastern University's College of Computing, AI, and Cybersecurity. As the Director of the Center for Information Protection, Education, and Research (CIPhER), he spearheads initiatives in cybersecurity research and education. With over 10,000 citations, Dr. Levy's research focuses on cybersecurity threat prevention, cyber risk assessments, with a particular emphasis on human-factor aspects. Throughout his career, Dr. Levy has held various leadership roles, including President of the Association for Information Systems' Special Interest Group on Information Security and Privacy (SIGSEC). He is a senior member of IEEE and has also serving as a cybersecurity expert for federal agencies and local government groups, and as a member of the FBI's InfraGard South Florida Chapter. Dr. Levy has graduated over 60 Ph.D.s in Information Systems and Cybersecurity Management and has published numerous papers in peer-reviewed journals and conference proceedings. He is the recipient of several awards, including the Professor of the Year award and the NSU Provost's External Funding Recognitions. As a sought-after expert, Dr. Levy regularly provides keynote talks and media commentary on cybersecurity topics. Learn more about his work via https://sites.nova.edu/levyy and https://infosec.nova.edu/cylab/.
Ella Luedeke
Ella Luedeke is a senior Computer Science student at the University of North Florida. With a minor in Leadership, she is involved with the Hicks Honors College, Honors in the Major, and Upsilon Pi Epsilon Honors Society. She is a Peer Assisted Student Success Leader, facilitator for the Honors Colloquium course, and President of the Women in Cybersecurity club. She is also a student editor of the Florida Undergraduate Research Journal and a Student Ambassador for UNF’s College of Computing, Engineering, and Construction. She participated in the NSF Research for Undergraduate Fellowship program at the University of Texas at Arlington and the University of North Carolina at Charlotte under Dr. Meera Sridhar. Ella currently conducts research with Dr. Indika Kahanda on the application of large language models for automating student feedback. She hopes to pursue a PhD in Computer Science and further research in applicable AI systems.
Herbert Mattord
Herb Mattord, Ph.D., CISM, CISSP completed 26 years of IT industry experience before joining the faculty at Kennesaw State University in 2002. He was the Manager of Corporate Information Technology Security at Georgia-Pacific Corporation, where much of his practical knowledge in information security was acquired. He is on the faculty at Kennesaw State University with the rank of Professor where he teaches Information Security and Cybersecurity. He serves as the Associate Director of the KSU Center for Cybersecurity Education. He is the co-author of several books published by Course Technology and an active researcher in information security management topics.
Aanya Mehta
Aanya Mehta is a junior at Rensselaer Polytechnic Institute, pursuing a dual major in Computer Science and Information Technology & Web Science with a focus track in Information Security. She has conducted research on the intersection of Generative AI and Quantum Computing, fine-tuning large language models with quantum techniques. Last summer, she interned at Google, developing a full-stack web application and building a data pipeline to aggregate and transform complex datasets. She is passionate about solving real-world challenges through innovative technology and looks forward to applying her skills to create impactful solutions.
Erik Moore
My career blends institutional leadership with cybersecurity/IT operations leadership in industry, government, and academia. I'm the Program Director of the Online MS in Cybersecurity Leadership at Seattle University, engaging with partners, mentors, and industry to provide students with the opportunity to accelerate and focus their careers.
Research interests include:
- Any research leading to a resilient cyber-empowered global society
- Defensive cyber architectures using graph theory that work to address the asymmetrical problem
- Cyber psychology & sociology related to incident response, training, and other adversarial contexts
- Immersive & Collaborative Cyber education experiences in 3D virtual spaces and other empowering environments of high validity in relation to career experiences
In cyber operations, I've provided executive director-level leadership in government, transforming cybersecurity, regional IT infrastructure, education technologies, and fleet services. I've trained National Guard cyber units in multiple states (including red-teaming and scenario design) and I design virtual world avatar-based scenarios. Also, I've provided incident response analysis to the National Guard Bureau in Washington DC.
Sateesh Nutulapati
Sateesh Nutulapati is Director of Hosting & DevOps at New Target, Inc., where he leads enterprise, government, and non-profit web application hosting and infrastructure security. With over two decades of experience in IT and cybersecurity operations, he manages secure environments requiring strict compliance with frameworks such as FedRAMP, HIPAA, and PCI, while architecting scalable and resilient hosting platforms. Alongside his technical leadership, Sateesh is an independent researcher exploring human factors in cybersecurity. Drawing on his master’s degrees in psychology and neuroscience from King’s College London, his work investigates how stress, fatigue, attention, and decision-making biases contribute to cybersecurity risks.
Weihao Qu
Dr. Weihao Qu is an Assistant Professor of Computer Science and Software Engineering at Monmouth University. His research focuses on program analysis for security, gamified cybersecurity education, and AI in computing education.
Before joining Monmouth, he worked at Meta on Zoncolan, a large-scale static analysis tool for detecting security vulnerabilities. His current work, supported by the NSF CRII Award (NSF 2451348), explores formal verification of relational quantitative properties in programs and its potential security application. Related work can be found: https://weihaoqu.com.
Harini Ramaprasad
Dr. Ramaprasad is a Teaching Professor and Associate Dean for Undergraduate Programs and Student Success for the College of Computing and Informatics of the University of North Carolina at Charlotte. She holds a B.S. degree from Bangalore University, and M.S. and Ph.D. from North Carolina State University. Her research interests are primarily in computer science education, with recent work focusing on student learning and engagement in cybersecurity education.
Michelle Ramim
Michelle Ramim is an assistant professor if Health Informatics at Dr. Kiran C. Patel College of Osteopathic Medicine, Nova Southeastern University. She was the founding director of the Bachelor of Science in Health Informatics, and actively chairs both the Health Informatics Bachelor and Master curriculum committees. She mentors Medical and Doctor of Philosophy Students, as well as chairs the annual students research symposium. Dr. Ramim research focuses on investigating applied health informatics issues related to threats to wearable and implantable medical devices, remote patient monitoring and systems implementation, data privacy and ethics, risk assessment, as well as information security governance. Her research has been sponsored by the National Security Agency. Dr. Ramim published over 40 papers in peer-reviewed journals and conference proceedings over the past 20 years with nearly 730 citations. Dr. Ramim completed various training programs provided by the Federal Bureau of Investigation (FBI), and actively serves as a Board Member and Healthcare Sector Chief at the FBI/ InfraGard South Florida. Additionally, she serves as a board member and Student Education chair, Student Scholarship and Annual Innovation Award competition at the Healthcare Information and Management Systems Society (HIMSS), as well as nationally on the Cybersecurity, Privacy and Security HIMSS committee. Dr. Ramim is also a frequent invited keynote speaker at national and international meetings, including the American Osteopathic College of Occupational and Preventative Medicine.
Kenyatte Simuel
Kenyatte Simuel has thirty years of experience in Information Technology, spanning industries such as manufacturing, healthcare, automotive, education, and technology. Over the course of his career, Kenyatte has led teams on various projects involving Technology Infrastructure, application development, and Compliance.
His duties have included:
- Leading Compliance efforts within organizations
- Managing large teams both locally and internationally
- Negotiating and selecting vendor contracts
- Creating balanced scorecards to track key performance indicators
- Encouraging continuous improvement and applying industry best practices
- Developing security policies
- Working with vendors to improve savings on operational and capital costs
Kenyatte holds degrees from Purdue University and Dakota State.
He currently serves as a Department Chair at Ivy Tech Community College, where he oversees Cybersecurity, Network Infrastructure, Cloud Technologies, and IT Support. He is the main contact for Ivy Tech's CAE-CD Center and chairs the college's Cybersecurity Curriculum Committee.
Meera Sridhar
Dr. Meera Sridhar is an Associate Professor in the Department of Software and Information Systems at UNC Charlotte. Dr. Sridhar received her Ph.D. in computer science from the University of Texas at Dallas and her B.S. and M.S. degrees in computer science from Carnegie Mellon University. Dr. Sridhar has more than 20 years of experience in software and systems security, language-based security, formal methods, cyber-physical systems security, and cybersecurity education. Her research is funded by the National Science Foundation, the NC General Assembly, and NSA; her work has been published in top security and formal methods venues.
Dr. Meera Sridhar serves as the Director of the Center for Energy Security and Reliability (CESAR), a state-funded, collaborative initiative between UNC Charlotte, North Carolina State University, and North Carolina A&T University. In this role, she oversees research and education efforts focused on protecting critical energy infrastructure and developing resilient, sustainable power grids. Her leadership includes fostering partnerships with industry and government, securing research funding, and guiding workforce development to prepare future professionals in energy security.
As Director of the external-facing Smart Home IoT Lab, Dr. Sridhar not only advances research on IoT security for home and urban environments but also supports faculty and students in research, teaching, and outreach in IoT. The lab actively engages in outreach programs for K-12 schools, community colleges, and underrepresented groups, helping to broaden participation and awareness in IoT security and technology fields.
Shoshana Sugerman
Shoshana Sugerman is a student at Rensselaer Polytechnic Institute, pursuing a Bachelor of Science in Information Technology & Web Science with a focus track in Information Security. She has hands-on experience in cybersecurity and quantum computing, serving as a lead researcher on projects involving augmented reality cybersecurity training, Grover's algorithm for threat detection, and ethical considerations of AI in security. Shoshana also competed in the National Cyber League, ranking in the top percentiles. Proficient in various programming languages and technologies, she actively contributes to advancing cybersecurity and AI-driven educational methodologies.
Hsiao-an (Justin) Wang
Dr. Hsiao-an Wang is a cybersecurity educator and researcher with a Ph.D. in Computer Science from Marquette University. His work centers on offensive and system security, cybersecurity education, and the development of adaptable teaching tools and hands-on learning experiences. He designs custom Capture-the-Flag challenges to foster engagement and practical skill development, and his recent research explores integrating limited-context large language models (LLMs) into higher education to address challenges arising from students’ overreliance on AI technologies.
Xinli Wang
Dr. Xinli Wang is an associate professor, where he brings deep expertise in cybersecurity, artificial intelligence in cybersecurity, and cybersecurity education. His work focuses on advancing the use of AI in digital forensics and information assurance, helping students stay on the leading edge of this rapidly evolving field. Dr. Wang is dedicated to developing dynamic, hands-on learning experiences. He regularly is updating his course materials with the latest tools, technologies, and industry practices. Dr. Wang also serves as a mentor for graduate students on their capstone projects, fostering the next generation of cybersecurity professionals.
Dr. Wang enjoys teaching courses in digital forensics and cybersecurity. Outside the classroom, he enjoys walking daily and finding balance through an active lifestyle.
Morgan Zantua
Morgan Zantua is the Director of the Center for Cybersecurity Innovation at City University of Seattle, Program Manager for the Master of Science in Cybersecurity and Bachelor of Science in Cybersecurity, and Associate Professor at the School of Technology & Computing (STC). Morgan is the Principal Investigator (PI) on multiple grants to expand cybersecurity career pathways opportunities through teacher development and cybersecurity career exploration. STARTALK - Korean integrates culture, language, web design, Cybersecurity and Python programming for high school and college Korean language learners. She convenes teams to create innovative and integrated solutions to attract transitioning military and the Center actively involved in an SBA Cyber clinic and VICEROY. She is faculty advisor to the CityU WiCyS. Morgan's research interests include Evidencing Competencies through Competitions with Dr. Daniel Manson, psychometric profiling of cybersecurity work roles, and strategies for transitioning military into cyber roles. She holds a master's degree in Whole Systems Design and thirty years' experience in workforce development.
Xin Zhao
Xin Zhao is a Security Researcher on the Android RED Team at Google. He specializes in developing tools to automate bug discovery and improve the security of the Android platform. His work involves creating and fine-tuning fuzzers to find vulnerabilities in new code, as well as building static analyzers and QL queries to identify potential bugs before they are even released. By leveraging AI agents, he's at the forefront of using machine learning to detect and prevent security flaws. Xin's research helps Google's security teams find and fix weaknesses, ultimately making Android a safer and more secure platform for billions of users worldwide.
Vaibhav Agrawal
Vaibhav Agrawal is a cybersecurity expert with over 12 years of experience specializing in Software, Mobile, and LLM Security. Currently a Security Engineer at Google, he leads key security projects for Fitbit and Google Home, focusing on protecting user data and privacy. Beyond his work at Google, Vaibhav is a senior member of the IEEE, a dedicated open-source contributor, and a speaker at security conferences such as BSides. He earned his master's degree from San Jose State University.
Adonnis Alexander
I am Adonnis Alexander from San Antonio, Texas. Currently, I serve as the Vice Chair of Franklin University's ACM chapter and the Vice Chair of the ACM's cybersecurity special-interest group (SIG). I am obtaining my degree in Cybersecurity with an interest in AI on the side.
Mohit Chandarana
Mohit is Codio's Data Science and AI specialist. From generating insightful learning analytics for CS Educators to prototyping novel product features and algorithms, he believes in bridging the gap between cutting-edge academic research and its application in the industry.
Artem Protsenko
Artem Protsenko is a Ukrainian student and AI enthusiast who loves building tools that make life easier and more engaging. As the lead developer of AvaSec—an AI-powered chatbot for the CISSE community—he played a key role in bringing the project to life, using technologies like FastAPI, LangChain, and ChromaDB to help conference participants access real-time information. With experience in Python, C/C++, and web development, Artem enjoys turning ideas into working solutions and is always exploring new ways to make technology more accessible and impactful.
Sarah Zerpa
Service Desk Analyst at the University of Tampa with a strong foundation in technology and a passion for bridging art and innovation. My diverse background spans industries such as retail, logistics, healthcare, and case management, equipping me with a unique perspective on problem-solving and user-centered design. I've participated in immersive learning experiences through Bioinformatics and AI4ALL bootcamps and hold certifications in SQL and CompTIA A+. Excited to contribute to initiatives like the AI Chatbot project, I aim to help bring impactful ideas to life while fostering smarter, safer digital ecosystems.
Marquee & Academic Partner
Seattle University, located in the heart of the Pacific Northwest's thriving technology corridor, is a nationally recognized independent university known for its academic excellence, social justice leadership, and commitment to educating the whole person. With strong programs across science, engineering, business, law, and the humanities, the university fosters interdisciplinary collaboration and civic engagement. Its central location in Seattle provides students and faculty with meaningful connections to industry, government, and nonprofit sectors, making it a vibrant hub for innovation and public service.
Academic Partner
As a private nonprofit institution of higher education, City University of Seattle's mission is to change lives for good by offering high quality and relevant lifelong education to anyone with the desire to learn. CityU's vision is to be the destination of choice for accessible, career-focused education with a focus on equipping a diverse student population with 21st century skills and technology tools.
Gold Sponsor
At Codio, we fuse computing education research with AI to deliver learning experiences that truly build job-ready skills. By combining evidence-based pedagogy, intelligent technology, and immersive design, we create a new standard of “better tech skills learning.” The results speak for themselves: higher completion rates, increased time on task, stronger grade attainment, and learners who graduate with the confidence and capabilities today's workforce demands.
Gold Sponsor
International Council of E-Commerce Consultants, also known as EC-Council, is the world's largest cyber security technical certification body. We operate in 145 countries globally and we are the owner and developer of the world-famous Certified Ethical Hacker (C|EH), Computer Hacking Forensics Investigator (C|HFI), Certified Security Analyst (ECSA), License Penetration Testing (Practical) programs, among others. We are proud to have trained and certified over 300,000 information security professionals globally that have influenced the cyber security mindset of countless organizations worldwide.
Exhibitor
The Master of Cybersecurity and Leadership (MCL) program at the University of Washington Tacoma equips professionals and military personnel with technical backgrounds to enhance their leadership and cybersecurity skills for career advancement. By integrating resources from the School of Engineering & Technology and the Milgard School of Business, the program fosters innovative solutions for information assurance and cybersecurity challenges, positioning graduates for success and entrepreneurial opportunities in Washington's cybersecurity landscape.
Area Partner
Discover the perfect location for your stay in Seattle at the Silver Cloud Hotel Seattle – Broadway. Situated in the vibrant Capitol Hill neighborhood, our hotel offers convenient access to Seattle University, Swedish Medical Center, and downtown Seattle, where you can explore the city's top-notch dining, shopping, and entertainment options. Don't forget to make a stop at the lively Pike Place Market during your visit.
Sponsorship Opportunities
CISSE™ offers a distinctive platform for showcasing your organization with precision, targeting not just cybersecurity enthusiasts, but the educators in cybersecurity. For 29 years, the esteemed members of CISSE™, including those deeply invested in educational methodologies, have convened to unravel the complexities of teaching emerging subjects. Place your tools and resources in the hands of these distinguished individuals and demonstrate how you can bolster their mission.
Session 1 – AI for Threat Detection & Synthetic Media Defense
-
10:20 AM - November 12th
A case study for combating student overuse of Generative Artificial Intelligence in Cybersecurity educational activities using Augmented Reality Capture-the-Flag development
Shoshana Sugerman, Sanya Joseph, Quinn Colognato, Mary Cotrupi, Aanya Mehta, Tanvi Mehta, Emily Goldman, Ishneet Kaur, Victoria Cai, Gabriel Bezerra, Adam Kaplan, Arielle Revis, Lala Liu, Samuel Leung, Elif Kulahlioglu, Rachel Schneider, Mikah Schueller, Quinn Sharp, James Porvaznik, Brian Callahan -
10:40 AM - November 12th
-
11:00 AM - November 12th
From Creation to Detection: How Dataset Properties Impact Deepfake Model Performance
Lauren Matthews, Idongesit Mkpong-Ruffin, Deidre Evans, Chutima Boonthum-Denecke -
11:20 AM - November 12th
Study of AI Object Detection: Patterns on Animals with YOLO and Adversarial Patches
Aniya Hopson, Chutima Boonthum-Denecke, Idongesit Mkpong-Ruffin -
11:40 AM - November 12th
Session 2 – AI-Driven Operations & Convergent Security
-
2:00 PM - November 12th
-
2:20 PM - November 12th
-
2:40 PM - November 12th
-
3:00 PM - November 12th
-
3:20 PM - November 12th
Self-Hosted Workflow Automation For AI-Based Cybersecurity Operations
Hareign Casaclang, Bianca Ionescu, Yoohwan Kim, Ju-Yeon Jo
Session 3 – Cyber Risk Awareness & Human Behavior
-
4:00 PM - November 12th
Analysis of Cybersecurity Risks and Teenage Digital Behavior Patterns
Eric McCloy, Samuel Nimako-Mensah, Albert Samigullin -
4:20 PM - November 12th
CodeWars: Using LLMs for Vulnerability Analysis in Cybersecurity Education
Arunima Chaudhary, Walter Colombo, Amir Javed, Junaid Haseeb, Vimal Kumar, Fernando Alva Manchego, Richard Larsen -
4:40 PM - November 12th
-
5:00 PM - November 12th
-
5:20 PM - November 12th
Roll with it: Awareness raising with Cyber Defence Dice
Steven Furnell, Lucija Šmid, James Todd, Xavier Carpent, Simon Castle-Green
Session 4 – Cybersecurity Workforce Development & Frameworks
-
10:20 AM - November 13th
Building Nuclear-Specific Cybersecurity Expertise in Higher Education
Amorita Christian, Myles Nelson, Tiffany Fuhrmann -
10:40 AM - November 13th
-
11:00 AM - November 13th
Mapping the Gap: Analysis of Nuclear Cybersecurity Education in U.S. Universities
Myles Nelson, Amorita Christian, Tiffany Fuhrmann, Charles Nickerson -
11:20 AM - November 13th
Play NICE: Incorporating Cyber Phraseology into K-12 Education
Timothy Crisp, John Hale -
11:40 AM - November 13th
Session 5 – Phishing, Awareness & Applied Security Training
-
2:00 PM - November 13th
-
2:20 PM - November 13th
CyberGLA: Protection Against Advanced AI-Powered Phishing Threats
Weihao Qu, Gurmeet Singh, Daniel Crawford, Bingjun Li, Jalen Smith -
2:40 PM - November 13th
Detecting and Mitigating AI Prompt Injection Attacks in Large Language Models (LLMs)
Abel Ureste, Hyungbae Park, Tamirat Abegaz -
3:00 PM - November 13th
Development and Validation of a Healthcare Workers Phishing Risk Exposure (HWPRE) Taxonomy for Mobile Email
Christopher Collins, Yair Levy, Gregory Simco, Ling Wang -
3:20 PM - November 13th
Teaching Endpoint Protection through Wazuh: A Project-Based Approach to Cybersecurity Education
Sara Sutton, Victor Kipchirchir Bunge, Xinli Wang, Johnfia Frank, Esther Djan
Session 6
-
4:00 PM - November 13th
-
4:20 PM - November 13th
Cybercamp: An Experience Report on the Transformations of an Intensive Cybersecurity Summer Camp for High School Students
Jose R. Ortiz Ubarri, Rio Piedras, Rafael A. Arce Nazario, Rio Piedras, Kariluz Dávila Diaz -
4:40 PM - November 13th
Examining the Capabilities of GhidraMCP and Claude LLM for Reverse Engineering Malware
Joshua Cole Watson, Bryson R. Payne, Denise McWilliams, Mingyuan Yan -
5:00 PM - November 13th
Integrating Vulnerability Assessments with Security Control Compliance
Ernesto Ortiz, Aurora Duskin, Noah Hassett, Clemente Izurieta, Ann Marie Reinhold
Session 7 – Generative AI in Cybersecurity Education
-
4:00 PM - November 13th
Cybersecurity Education with Generative AI: Creating Interactive Labs from Microelectronic Fundamentals to IoT Security Exploitation
Kushal Badal, Xiaohong Yuan, Huirong Fu, Darrin Hanna, Jason Gorski -
4:20 PM - November 13th
Distributed Agency in AI-Enhanced Cybersecurity Education: A Posthuman Instructional Design Framework
Ryan Straight, Josh Herron -
4:40 PM - November 13th
-
5:00 PM - November 13th
Toward Experiential Training Program for AI Security and Privacy Practitioners
Mohammed Abuhamad, Mujtaba Nazari, Loretta Stalans, Eric Chan-Tin -
5:20 PM - November 13th
An AI Agent Workflow for Generating Contextual Cybersecurity Hints
Hsiao-An Wang, Joshua Goldberg, Audrey Fruean, Zixuan Zou, Ruoyu Zhao, Sakib Miazi, Jens Mache, Ishan Abraham, Taylor Wolff, Jack Cook, Richard Weiss
Session 8
-
4:00 PM - November 13th
Interactive Cybersecurity Lab: Hands-on Cybersecurity Training
Chrstian Soucy, Danai Chasaki -
4:20 PM - November 13th
-
4:40 PM - November 13th
Validating the fundamental cybersecurity competency index (FCCI) through expert evaluation for human-generative artificial intelligence (GenAI) teaming
Witko, Yair Levy, Catherine Neubauer, Gregory Simco, Laurie Dringus, Melissa Carlton
- 10 Mar 2024
- 28 Oct 2025
- 14210



