
Utilizing and Securing Large Language Models for Cybersecurity and Beyond

Submission Deadline: 01 June 2025

Guest Editors

Dr. Yue Zhang

Email: zyueinfosec@gmail.com

Affiliation: Department of Computer Science, Drexel University, Philadelphia, PA 19104, USA 

Homepage: yue.zyueinfosec.com

Research Interests: LLM Security 


Dr. Kaidi Xu

Email: kx46@drexel.edu

Affiliation: Department of Computer Science, Drexel University, Philadelphia, PA 19104, USA

Research Interests: Trustworthy AI


Dr. Minghui Xu

Email: mhxu@sdu.edu.cn

Affiliation: School of Computer Science and Technology, Shandong University, Qingdao 266237, China 


Summary

As technology evolves, so do the challenges and opportunities in cybersecurity. In recent years, the emergence of Large Language Models (LLMs) has opened a new frontier for addressing cybersecurity threats and advancing security practices. LLMs can strengthen both security and privacy: for example, they aid in secure coding, test case generation, vulnerable code detection, and code fixing throughout the software lifecycle. However, LLMs can also be misused to mount attacks that threaten both security (e.g., malware attacks) and privacy (e.g., social engineering). In addition, LLMs themselves are subject to AI-model-inherent attacks (e.g., data poisoning, backdoor attacks) and non-AI-model-inherent attacks (e.g., remote code execution, side channels), underscoring the urgent need for stronger defenses for current LLMs.

This special issue addresses security and privacy issues of direct relevance to its readership, including researchers, practitioners, and policymakers in the field of LLMs. By studying these issues comprehensively, it aims to offer valuable insights and solutions for using LLMs to advance security and privacy and, more importantly, for mitigating the risks that LLMs themselves introduce. The special issue will cover a wide range of topics related to the security and privacy of large language models, including but not limited to:

· Utilizing LLMs for secure code generation

· Leveraging LLMs for program testing

· Employing LLMs for vulnerability detection and remediation

· Harnessing LLMs for malware detection and defense

· Ensuring data security and privacy with LLMs

· Understanding and defending against jailbreaks targeting LLMs

· Identifying and mitigating bias in LLMs

· Exploring the security and privacy of LLM agents

· Defending against attacks leveraging LLMs

· Privacy-preserving techniques for LLMs

· Adversarial attacks and defenses in LLMs

· Ensuring fairness, transparency, and accountability in LLMs

· Examining regulatory frameworks and policy implications for LLMs

· Addressing ethical considerations in the development and deployment of LLMs


Keywords

Large Language Model Security, Vulnerable Code Detection with LLMs, Secure Code Generation with LLMs, Adversarial Attacks on LLMs, Privacy-Preserving Techniques for LLMs, Jailbreak Defenses for LLMs, Bias Mitigation in LLM Security, LLM Agent Security, Malware Defense Leveraging LLMs, Transparency and Accountability in LLM Security
