Submission Deadline: 01 December 2025
Dr. Tinghui Ouyang
Email: ouyang.tinghui.gb@u.tsukuba.ac.jp
Affiliation: Center for Computational Sciences, University of Tsukuba, Tsukuba, Ibaraki 305-8577, Japan
Research Interests: Data science, machine learning, large language models, anomaly detection
Large Language Models (LLMs) have achieved remarkable performance across a wide range of natural language processing tasks. However, their widespread deployment also introduces significant security and robustness concerns, including hallucination, textual out-of-distribution (OOD) detection failures, adversarial vulnerabilities, data poisoning, and privacy leakage. This special issue aims to bring together cutting-edge research that addresses these challenges, ensuring the safe and secure use of LLMs in real-world applications.
We invite high-quality, original research papers and review articles addressing, but not limited to, the following topics:
· Hallucination in LLMs: Understanding and mitigating fabricated or misleading content generation in LLMs.
· Textual OOD Detection and Defense: Detecting out-of-distribution inputs and mitigating the failures they cause in LLMs.
· Adversarial Attacks and Robustness: Techniques for detecting, defending against, and mitigating adversarial manipulation of LLM inputs and generated content.
· Poisoning Attacks on Training Data: Understanding how malicious data injections impact model behavior and exploring countermeasures.
· Privacy and Confidentiality Risks: Analyzing risks related to unintentional leakage of sensitive or proprietary information.
· Trustworthy AI for LLMs: Developing explainable, interpretable, and auditable frameworks to enhance LLM security.
· Secure Model Fine-Tuning and Deployment: Ensuring secure transfer learning, reinforcement learning, and continual learning in LLMs.
· Evaluation Metrics and Benchmarks: Designing robust security evaluation frameworks for LLM safety and performance.