Abstract
Artificial Intelligence (AI) has become an increasingly powerful tool in many domains, particularly image classification and object detection. As AI advances, novel methods of deceiving machine learning models, such as adversarial patches, have emerged. These subtle modifications to images can cause objects to be misclassified, posing a substantial challenge to the reliability of such models. In this paper, we present our research findings alongside a review of the literature on adversarial examples and object detection.
This research builds upon previous work by investigating the impact of small patches on object detection using YOLOv8. We began by exploring patterns within images and their influence on model accuracy, then conducted a follow-up study evaluating how adversarial patches, particularly those targeting animal patterns, affect YOLOv8’s ability to accurately detect objects. Additionally, we explore how untrained patterns impact the model’s performance, aiming to identify vulnerabilities and enhance the robustness of object detection systems.
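As a concrete illustration of the kind of evaluation described above (a minimal sketch, not the authors’ exact pipeline), the following Python snippet overlays a candidate patch on an image and compares YOLOv8 detections before and after. It assumes the `ultralytics` package; the file names, patch size, and placement are hypothetical placeholders.

```python
# Sketch: compare YOLOv8 detections on a clean image vs. the same image
# with a candidate adversarial patch pasted onto it.
# Assumptions: the `ultralytics` and `Pillow` packages are installed;
# "clean.jpg" and "patch.png" are illustrative file names.
from PIL import Image
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # pretrained YOLOv8 nano weights

clean = Image.open("clean.jpg").convert("RGB")
patch = Image.open("patch.png").convert("RGB").resize((64, 64))

patched = clean.copy()
patched.paste(patch, (50, 50))  # overlay the patch at an arbitrary position

def summarize(result):
    """Return (class name, confidence) pairs for every detection."""
    boxes = result.boxes
    return [(result.names[int(c)], float(p)) for c, p in zip(boxes.cls, boxes.conf)]

before = summarize(model(clean)[0])
after = summarize(model(patched)[0])
print("clean:  ", before)
print("patched:", after)  # an effective patch alters or suppresses detections
```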
