Purdue University · Doctor of Technology
Building Explainable AI Systems for Cyber Threat Intelligence and Security Analytics
I am a cybersecurity researcher and doctoral student in the Doctor of Technology program at Purdue University. My academic background includes a degree in Computer Engineering Technology, a Master of Science in Information Technology Management with an emphasis in Information Assurance, and a dual-degree Master of Business Administration. This academic foundation is complemented by more than twenty years of professional experience in threat intelligence, security operations, and enterprise cyber defense.
My research focuses on the intersection of artificial intelligence, knowledge representation, and cyber threat intelligence. In particular, I investigate the use of neuro-symbolic AI and fuzzy logic inference to make AI-generated threat intelligence more reliable, trustworthy, and explainable. I also explore how AI-assisted ontology engineering can enhance the ability of security systems to interpret and reason about complex threat data.
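To make the fuzzy-inference idea concrete, here is a minimal sketch that scores how much trust an analyst might place in a single indicator, using a zero-order Sugeno-style fuzzy system over two inputs: source reliability and corroboration. The membership breakpoints, rule base, and output levels are illustrative assumptions, not the inference model from my actual research.

```python
# A zero-order Sugeno-style fuzzy inference sketch for scoring how much
# trust to place in a single threat indicator. All membership breakpoints,
# rules, and output levels are illustrative assumptions.

def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function peaking at b over the support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def trust_score(source_reliability: float, corroboration: float) -> float:
    """Both inputs are normalized to [0, 1]; returns a trust score in [0, 1]."""
    # Fuzzify each input into "low" and "high" memberships.
    rel_low = tri(source_reliability, -0.5, 0.0, 0.7)
    rel_high = tri(source_reliability, 0.3, 1.0, 1.5)
    cor_low = tri(corroboration, -0.5, 0.0, 0.7)
    cor_high = tri(corroboration, 0.3, 1.0, 1.5)

    # Rule base: (firing strength via min as fuzzy AND, crisp output level).
    rules = [
        (min(rel_high, cor_high), 0.95),  # reliable and corroborated
        (min(rel_high, cor_low), 0.60),   # reliable but uncorroborated
        (min(rel_low, cor_high), 0.50),   # weak source, independent support
        (min(rel_low, cor_low), 0.10),    # weak and uncorroborated
    ]

    # Sugeno defuzzification: weighted average of rule output levels.
    total = sum(w for w, _ in rules)
    return sum(w * z for w, z in rules) / total if total else 0.0

if __name__ == "__main__":
    print(f"trust = {trust_score(0.8, 0.2):.2f}")  # reliable, lightly corroborated
```

The appeal of this style of inference for CTI is that every output traces back to named rules an analyst can read and dispute, rather than to an opaque score.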
Developing structured approaches to threat actor identification and intelligence lifecycle automation using AI-driven pipelines.
Designing knowledge representations that capture the semantics of cyber threats, enabling structured reasoning over intelligence data (a toy graph sketch follows this list).
Building AI systems that produce transparent, interpretable outputs so human analysts can trust and verify machine-generated intelligence.
Integrating symbolic reasoning with neural architectures to achieve robust, uncertainty-aware AI that bridges the gap between data-driven and logic-based methods.
Investigating vulnerabilities in machine learning models, including data poisoning, prompt injection, and supply chain attacks on AI systems.
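The graph sketch referenced above shows one way such a representation can support structured reasoning: typed nodes joined by labeled edges, traversed to answer an analytic question. The actor and malware names are fictional; the "uses" and "targets" relations mirror STIX 2.1 relationship types.

```python
# A toy STIX-flavored knowledge representation: typed nodes plus labeled
# edges, traversed to answer a simple analytic question. The actor and
# malware names are fictional; "uses" and "targets" mirror STIX 2.1
# relationship types.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class Node:
    id: str
    kind: str  # e.g., "threat-actor", "malware", "identity"
    name: str

@dataclass
class Graph:
    nodes: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)  # (source_id, relation, target_id)

    def add(self, node: Node) -> None:
        self.nodes[node.id] = node

    def relate(self, src: str, relation: str, dst: str) -> None:
        self.edges.append((src, relation, dst))

    def follow(self, src_id: str, relation: str) -> list:
        return [self.nodes[d] for s, r, d in self.edges
                if s == src_id and r == relation]

g = Graph()
g.add(Node("threat-actor--1", "threat-actor", "ExampleBear"))  # fictional
g.add(Node("malware--1", "malware", "ExampleRAT"))             # fictional
g.add(Node("identity--1", "identity", "energy sector"))
g.relate("threat-actor--1", "uses", "malware--1")
g.relate("malware--1", "targets", "identity--1")

# Structured reasoning: which sectors does an actor reach via uses -> targets?
for malware in g.follow("threat-actor--1", "uses"):
    for target in g.follow(malware.id, "targets"):
        print(f"{g.nodes['threat-actor--1'].name} -> {malware.name} -> {target.name}")
```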
"Enhancing Cyber Threat Intelligence through Explainable AI: A Design Science Approach Using Knowledge Graphs, Ontologies, and Large Language Models"
My doctoral research addresses a critical gap in cyber threat intelligence (CTI) operations: the lack of transparency and explainability in AI-driven threat detection. Using a Design Science Research methodology, I am developing a framework that integrates knowledge graphs, ontology alignment, and large language models to improve network threat detection and attribution. The framework embeds explainable AI directly into the intelligence construction process, enabling analysts to understand not just what a model detected, but why it reached that conclusion. Central to this work is the use of imbalanced learning techniques to handle the highly skewed class distributions common in real-world threat data, alongside structured representations drawn from MITRE ATT&CK and STIX to ground the system in operational standards.
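As a concrete instance of the imbalanced-learning component, the sketch below applies the standard "balanced" class-weighting heuristic to a synthetic detection dataset with roughly one percent positive labels. The dataset and the choice of logistic regression are stand-ins for illustration, not the dissertation's actual pipeline.

```python
# Class weighting for skewed threat data using scikit-learn's "balanced"
# heuristic, w_c = n_samples / (n_classes * n_c). The synthetic dataset
# (~1% malicious) and the logistic-regression model are stand-ins, not
# the dissertation's actual pipeline.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.utils.class_weight import compute_class_weight

rng = np.random.default_rng(0)
n = 10_000
y = (rng.random(n) < 0.01).astype(int)           # ~1% "malicious" labels
X = rng.normal(size=(n, 8)) + y[:, None] * 0.5   # weak signal in 8 features

classes = np.unique(y)
weights = compute_class_weight(class_weight="balanced", classes=classes, y=y)
print(dict(zip(classes, np.round(weights, 2))))  # minority class weighted up

# The estimator applies the same heuristic internally; at 99:1 skew, raw
# accuracy is misleading, so weighting shifts the decision boundary toward
# recall on the rare malicious class.
clf = LogisticRegression(class_weight="balanced", max_iter=1000).fit(X, y)
print(f"training accuracy: {clf.score(X, y):.3f}")
```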
This research is conducted under the advisement of Dr. Julia Rayz, Professor and Associate Department Head in the Department of Computer and Information Technology at Purdue University, and a CERIAS Fellow specializing in natural language understanding, knowledge representation, and fuzzy logic.
Publications and conference presentations will be listed here as they are completed.
An explainable cyber threat intelligence framework that integrates LLMs, knowledge graphs, and ontologies for automated threat analysis and attribution.
Developing reasoning systems that align CTI ontologies (STIX, ATT&CK, D3FEND) for cross-framework threat intelligence correlation and analysis (an illustrative alignment table follows the project list).
Building systematic evaluation frameworks for assessing LLM performance on cybersecurity-specific tasks, including threat report summarization and indicator extraction (a minimal extraction-scoring sketch also appears below).
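To show what cross-framework alignment can look like at its simplest, the table-driven sketch below keys real ATT&CK technique IDs to a STIX 2.1 observable type and candidate defensive techniques. The countermeasure names are illustrative placeholders, not an authoritative D3FEND mapping.

```python
# A toy cross-framework alignment table: real ATT&CK technique IDs keyed
# to a STIX 2.1 observable type and candidate defensive techniques. The
# countermeasure names are illustrative placeholders, not an authoritative
# D3FEND mapping.

from dataclasses import dataclass

@dataclass(frozen=True)
class Alignment:
    attack_id: str           # MITRE ATT&CK technique ID (real)
    attack_name: str
    stix_type: str           # STIX 2.1 observable that often carries the sighting
    countermeasures: tuple   # hypothetical D3FEND-style defensive techniques

ALIGNMENTS = {
    "T1566": Alignment("T1566", "Phishing", "email-message",
                       ("message analysis", "sender reputation analysis")),
    "T1003": Alignment("T1003", "OS Credential Dumping", "process",
                       ("credential hardening", "process analysis")),
}

def correlate(attack_id: str) -> str:
    """Given a detected technique, what should we watch and deploy?"""
    a = ALIGNMENTS.get(attack_id)
    if a is None:
        return f"{attack_id}: no alignment recorded"
    return (f"{a.attack_id} ({a.attack_name}) -> observe via STIX '{a.stix_type}', "
            f"defend with: {', '.join(a.countermeasures)}")

print(correlate("T1566"))
```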
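For the LLM evaluation work, indicator extraction reduces naturally to set comparison against an analyst-labeled gold set. The sketch below computes precision, recall, and F1 over fabricated example indicators (documentation IP range, example.com domain).

```python
# Set-based scoring for indicator extraction: precision, recall, and F1 of
# model-extracted IOCs against an analyst-labeled gold set. The example
# indicators are fabricated (documentation IP range, example.com domain).

def prf1(gold: set, predicted: set) -> tuple:
    tp = len(gold & predicted)                       # true positives
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

gold = {"198.51.100.7", "evil.example.com",
        "d41d8cd98f00b204e9800998ecf8427e"}          # analyst-labeled truth
predicted = {"198.51.100.7", "evil.example.com",
             "10.0.0.1"}                             # one miss, one false positive

p, r, f = prf1(gold, predicted)
print(f"precision={p:.2f} recall={r:.2f} f1={f:.2f}")
# -> precision=0.67 recall=0.67 f1=0.67
```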
Interested in collaboration, speaking opportunities, or discussing research? Reach out below.