LAMA-WeST-Lab

Explainable Insider Threat Detection by leveraging Large Language Models

Speakers: Alexis Brissard, Parvin

Topics: Insider Threat Detection, LLM

Abstract

Insider threats are a major cybersecurity challenge, demanding more effective and efficient detection methods. The recent adoption of Large Language Models (LLMs) in Insider Threat Detection (ITD) has opened up new possibilities for monitoring and interpreting insider behaviors that may signal potential threats. However, LLMs have limitations, such as the risk of generating incorrect or misleading information. In this study, we conduct an in-depth comparative analysis of binary and multiclass classification performance on the CERT r4.2 dataset, the most widely used dataset in ITD. We perform feature engineering on the dataset and compare evaluation results on the raw and processed data, then examine the resulting performance impacts across BERT, Llama 3, and Phi-3 models. Additionally, we frame ITD as a generative task, applying Chain-of-Thought (CoT) prompting with Llama 3 and comparing its performance against current state-of-the-art benchmarks. Finally, we introduce an ontology-guided, explainable ITD model designed to improve transparency and deliver clear, interpretable results for better decision-making. This study not only highlights the potential of LLMs in ITD but also underscores the critical need for explainability in threat detection models.
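As a minimal illustration of the Chain-of-Thought prompting style mentioned in the abstract, the sketch below assembles a CoT prompt for binary classification of a user's activity log. The prompt wording, the user ID, and the example log events are illustrative assumptions, not the study's exact setup or prompts:

```python
# Sketch: building a Chain-of-Thought (CoT) prompt for binary insider-threat
# classification of one user's activity log. All field names, the example
# events, and the prompt text are hypothetical, for illustration only.

def build_cot_prompt(user_id: str, events: list[str]) -> str:
    """Assemble a CoT prompt that asks the model to reason step by step
    before emitting a single MALICIOUS/BENIGN label."""
    log = "\n".join(f"- {e}" for e in events)
    return (
        f"You are an insider-threat analyst. Review the activity log of "
        f"user {user_id}:\n{log}\n\n"
        "Think step by step: (1) note unusual times or devices, "
        "(2) look for data-exfiltration patterns, (3) weigh the evidence. "
        "Then answer with exactly one label: MALICIOUS or BENIGN."
    )

if __name__ == "__main__":
    # Hypothetical CERT-style events for a single user session.
    events = [
        "2010-07-12 23:41 logon from unassigned workstation PC-0342",
        "2010-07-12 23:52 connected removable drive, copied 1.2 GB",
        "2010-07-13 00:05 emailed archive to external address",
    ]
    print(build_cot_prompt("ACM2278", events))
```

The resulting string would then be sent to the model (e.g., Llama 3) whose final label is parsed from the response; that inference step is omitted here.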
