Detecting and Mitigating AI Prompt Injection Attacks in Large Language Models (LLMs)
Cover - CISSE Volume 13, Issue 1
PDF

Keywords

artificial intelligence
AI
prompt injection
prompt mitigation
Ollama

Abstract

AI is being interconnected with vital systems at an exponential rate, being described as the greatest shift in technology since the invention of the Internet. However, with the emergence of AI also involves the introduction of new critical vulnerabilities in the technology sector. This research will discuss the types of prompt injection attacks that AI can be subjected to, what they target and the possible repercussions of prompt injections. To counteract these attacks, solutions to detect different types of prompt injection will also be discussed, giving solutions to mitigate attacks that can expose critical data. Along with the solutions, different trade-offs between these solutions will be given. This research aims to expose the security issues that arise with the rapid implementation of experimental AI involving prompt injection and how to prevent it.

PDF

Open Access License Notice:
This article is © its author(s) and licensed under the Creative Commons Attribution 4.0 International License (CC BY 4.0), regardless of any copyright or pricing statements appearing in the PDF. The PDF reflects formatting used for the print edition and not the current open access licensing policy.