
When Adversarial Perturbations meet Concept Drift: an Exploratory Analysis on ML-NIDS

Abstract

We scrutinize the effects of “blind” adversarial perturbations against machine learning (ML)-based network intrusion detection systems (NIDS) affected by concept drift. There may be cases in which a real attacker – unable to access the ML-NIDS, and hence unaware that it is weakened by concept drift – attempts to evade it with data perturbations. It is currently unknown whether the cumulative effect of such adversarial perturbations and concept drift leads to a greater or lesser impact on the ML-NIDS. In this “open problem” paper, we seek to investigate this unusual but realistic setting; we are not interested in perfect-knowledge attackers. We begin by retrieving a publicly available dataset of documented network traces captured in a real, large (>300 hosts) organization. Overall, these traces include several years of raw traffic packets, both benign and malicious. Then, we adversarially manipulate malicious packets with “problem-space” perturbations, representing a physically realizable attack. Finally, we carry out the first exploratory analysis comparing the effects of our “adversarial examples” with those of their respective unperturbed malicious variants in concept-drift scenarios. Through two case studies (a “short-term” one of 8 days and a “long-term” one of 4 years) encompassing 48 detectors, we find that, although our perturbations induce a lower detection rate in concept-drift scenarios, some perturbations yield adverse effects for the attacker in intriguing use cases. Overall, our study shows that the topics we covered are still an open problem that requires a re-assessment by future research.
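The comparison sketched in the abstract (detection rate on perturbed vs. unperturbed malicious traffic, evaluated by a detector trained on older data) can be illustrated with a minimal toy example. The snippet below is not the paper’s pipeline: it uses synthetic placeholder features instead of the real network traces, a generic scikit-learn RandomForestClassifier as a stand-in for the 48 detectors, and a hypothetical `perturb` function as a crude feature-space proxy for the problem-space packet manipulations.

```python
# Minimal sketch (assumptions: synthetic data, RandomForest detector, toy perturbation).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def make_traffic(n, drift=0.0):
    """Synthetic stand-in for extracted traffic features; 'drift' shifts the distribution."""
    x_benign = rng.normal(0.0 + drift, 1.0, size=(n, 8))
    x_malicious = rng.normal(2.0 + drift, 1.0, size=(n, 8))
    x = np.vstack([x_benign, x_malicious])
    y = np.concatenate([np.zeros(n), np.ones(n)])
    return x, y

def perturb(x_malicious, eps=0.5):
    """Hypothetical proxy for a problem-space perturbation (e.g., padding or delays);
    not the paper's actual manipulation."""
    return x_malicious + rng.uniform(-eps, eps, size=x_malicious.shape)

# Train on "old" traffic, test on "drifted" traffic to mimic a concept-drift scenario.
x_train, y_train = make_traffic(1000, drift=0.0)
x_test, y_test = make_traffic(1000, drift=0.7)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(x_train, y_train)

malicious = x_test[y_test == 1]
det_plain = clf.predict(malicious).mean()           # detection rate, unperturbed samples
det_adv = clf.predict(perturb(malicious)).mean()    # detection rate, perturbed samples

print(f"Detection rate (unperturbed): {det_plain:.2%}")
print(f"Detection rate (perturbed):   {det_adv:.2%}")
```

In this toy setup the two printed rates can be compared directly; the paper’s finding is that, with real traces and realizable perturbations, the perturbed rate is not always lower, i.e., some perturbations backfire on the attacker.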

Conference paper

ACM Workshop on Artificial Intelligence and Security (AISec)

Publication date

2024-09-27

Last modified

2024-10-08