The course is developed by Kaspersky AI Technology Research Center and aims to equip cybersecurity professionals with the essential skills to understand, evaluate and defend against vulnerabilities in large language models (LLMs).
The advent of LLMs has revolutionized the way companies develop and interact with Artificial Intelligence (AI) systems, while simultaneously introducing new and intricate security challenges. A Kaspersky study has revealed that already in 2024 more than half of companies have implemented AI and Internet of Things (IoT) in their infrastructure. As these technologies become increasingly embedded in corporate systems and processes, understanding how they can be targeted, what vulnerabilities can be exploited for cyber attacks and how to defend against them is no longer optional—it's a vital skill for cybersecurity professionals.
To address this critical need, Kaspersky has expanded its renowned Cybersecurity Training portfolio with a new online course dedicated to the security of LLMs. Designed to provide a solid foundation in this emerging field, the course equips professionals with the expertise to assess vulnerabilities, implement effective defenses and design resilient AI systems. Participants will engage with real-world cases and practical assignments, honing their ability to deploy robust security measures and enhance the resilience of LLM-based applications.
The course draws upon the extensive expertise of the Kaspersky AI Technology Research Center, whose specialists have been dedicated to AI in cybersecurity and secure AI for nearly two decades—advancing the detection and mitigation of a wide spectrum of threats. Led by Vladislav Tushkanov, Research Development Group Manager at Kaspersky, the program offers an engaging learning experience through compelling video lectures, practical hands-on labs and interactive exercises, enabling participants to...
- Explore exploitation techniques such as jailbreaks, prompt injections and token smuggling to understand how to defend against them.
- Develop practical defense strategies across various levels—model, prompt, system and service.
- Apply structured frameworks for assessing and enhancing LLM security.
This training is useful for those starting their careers in AI cybersecurity, engineers building or integrating LLMs, and specialists working closely with AI infrastructure.
"The rise of large language models has revolutionized the approach taken by organizations to building and engaging with AI, opening new horizons of possibility. Yet, this technological leap also brings intricate security puzzles that demand immediate attention. For cybersecurity experts, mastering the art of spotting, exploiting and shielding against these vulnerabilities has become a vital craft. That’s why we developed this specialized course—to arm professionals with the insights and hands-on tools necessary to safeguard LLM-driven applications and stay one step ahead of the evolving threat landscape," says Vladislav Tushkanov.
To enroll to the course and learn more about the training program, please follow the link.