Detecting and Mitigating AI Prompt Injection Attacks in Large Language Models (LLMs)

File Size: 1.18 MB
Author: Abel Ureste, Hyungbae Park, Tamirat Abegaz
Date: 13 April 2026
Downloads: 34
AI is being integrated with vital systems at an exponential rate and has been described as the greatest shift in technology since the invention of the Internet. However, the emergence of AI also introduces new critical vulnerabilities into the technology sector. This research discusses the types of prompt injection attacks to which AI can be subjected, what they target, and the possible repercussions of prompt injections. To counteract these attacks, it also presents methods for detecting different types of prompt injection and for mitigating attacks that can expose critical data, along with the trade-offs between these solutions. This research aims to expose the security issues involving prompt injection that arise from the rapid deployment of experimental AI, and to show how to prevent them.