AI Hacking: Types of AI Cybercrime and How to Stay Safe

Is artificial intelligence safe? It’s arguably the most popular topic in the world of technology right now, and it’s one that is attracting significant debate.

At a global summit in Paris in February 2025, views about AI safety differed among world leaders. For example, the president of France, Emmanuel Macron, called for further regulation to help AI move forward, while the vice president of the United States, JD Vance, said that “pro-growth AI policies” should be prioritized over safety concerns.

While legitimate businesses, authorities and people are exploring ways to use AI responsibly and safely, there’s also another threat: the use of AI by cybercriminals to make their malicious activities even more powerful and successful.

AI cybercrime is a game-changer for security across the digital landscape, and it’s vital that everyone is aware of the risks and the actions that are needed to stay safe. This guide explores the different types of AI cyberattacks, what these attacks mean from a security standpoint, and what you can do now to shore up your defenses.

Examples of successful AI hacking

The risk of AI-generated malware and other criminal activity is much more than theoretical: these attacks are already happening and claiming victims worldwide. Here are just three examples from a long and growing list:

Impersonation: Guido Crosetto

In early 2025, scammers used AI-generated voice technology to impersonate the defense minister of Italy, Guido Crosetto. They then contacted wealthy entrepreneurs in Italy with a fictitious plea for financial support that would supposedly be used to help free Italian journalists kidnapped in the Middle East. At least one businessman was fooled into sending money to what proved to be a fraudulent account.

Malware: DeepSeek ClickFix scam

One cybercriminal operation set up a verification web page that mimics the Chinese AI chatbot DeepSeek. Users who complete what they think is a CAPTCHA check are tricked into running a command that installs Vidar Stealer and Lumma Stealer, malware that can be used to steal sensitive information, including logins and bank details, ultimately enabling online banking fraud.

User error: Samsung’s ChatGPT leak

While not an AI virus or malware attack as such, this high-profile incident highlighted how the door can inadvertently be opened to AI-related risks. In the spring of 2023, staff at the South Korean technology giant Samsung used ChatGPT to check some of the company's source code, without realizing that the data could be retained to further train the AI model. This effectively handed some of Samsung's highly sensitive intellectual property to a third party, beyond the company's control.

Types of AI cyberattacks

There are already several different ways in which AI can be used for malicious activity, either directly or in support of other criminal enterprises:

Social engineering

A social engineering attack is one where criminals attempt to influence human behavior and get users to give up information or assets voluntarily. This could be anything from sensitive personal data and bank account information to money, cryptocurrency, or access to certain devices and databases. AI is proving very helpful in this kind of attack, enabling cybercriminals to identify targets, develop convincing personas and messaging, and create audio and video recordings that add plausibility to the scam.

Phishing attacks

Phishing attacks rely on users believing that the messages, links, and email attachments they receive are genuine, then clicking through to what turns out to be malware. Good user education has helped reduce the success of these attacks in the past, but AI is now being used to make phishing attempts more convincing than ever. It can be used in real-time communications such as WhatsApp messages or social media chats, and even in spoof customer service chatbots where customers think they're sharing their account details with real members of staff.

AI ransomware

AI has substantially expanded the technical capabilities of ransomware attacks. This includes target research, such as assessing systems for the most promising vulnerabilities to exploit, and the ability to adapt ransomware files so that they can still evade detection by cybersecurity solutions.

Adversarial AI and malicious GPTs

Many cybercriminals distort the output of AI models by feeding them inaccurate data or tampering with the models' settings. The inaccuracies introduced can lead to dangerous biases, or to the generation of instructions that suit the hacker's objectives.
Alongside this, there has been a rise in the use of malicious AI tools that contain no safeguards against bias and misuse; GhostGPT is a recent example of an uncensored chatbot that can be used to create AI malware.

Deepfakes

You might think that deepfakes are only a concern for public figures whose faces and voices are put into fictitious, compromising positions, but the threat to everyone is real. Deepfake technology can be used to impersonate anyone through video, voice, or a combination of the two. These can potentially be used to gain access to sensitive information or even bank accounts by fooling security and verification procedures.

Fighting fire with fire: what AI cybercrime means for security

The best tool to use against AI cyber threats is artificial intelligence itself. That doesn’t mean that standard security solutions are powerless to prevent AI-supported attacks: they will still be able to protect against many kinds of malware and hacking. However, bringing AI into the security mix can add an extra dimension to several security methods, such as:

Incident response

AI can be used to automate incident response activities. For example, Kaspersky Incident Response can block malicious activity and isolate affected systems far more quickly than a human responding to a system alert could manage. Drastically reducing response and resolution times minimizes the impact of an attack, which ultimately limits the scale of disruption, expense, and data loss.
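To make this concrete, here is a minimal sketch of automated alert triage. The `edr_client` object and its methods are illustrative placeholders, not a real vendor API: the point is simply that a high-severity detection can trigger containment immediately, while lower-severity alerts are queued for analysts.

```python
# Hypothetical sketch of automated incident response triage.
# `edr_client` and its methods are illustrative placeholders,
# not a real vendor API.
from dataclasses import dataclass

@dataclass
class Alert:
    host: str
    severity: int      # 1 (low) to 10 (critical)
    description: str

ISOLATION_THRESHOLD = 8  # tune to your risk appetite

def handle_alert(alert: Alert, edr_client) -> None:
    """Contain critical alerts immediately; queue the rest for analysts."""
    if alert.severity >= ISOLATION_THRESHOLD:
        edr_client.isolate_host(alert.host)     # cut network access
        edr_client.snapshot_host(alert.host)    # preserve forensic evidence
        edr_client.open_ticket(alert, priority="P1")
    else:
        edr_client.open_ticket(alert, priority="P3")
```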

Threat detection and proactive threat hunting

With so many threats around, it can be hard for security teams to keep up with everything that’s out there and investigate every alert. This is where AI can help, analyzing user behavior and network traffic in detail, spotting potential threats faster, and filtering out the false positives that cost security teams valuable time. AI can also make threat detection proactive, hunting down cyberattacks before they even begin.
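As a simple illustration of this kind of anomaly detection, the sketch below trains a scikit-learn Isolation Forest on synthetic "normal" connection features and flags an outlier. The feature set and data are assumptions for demonstration only; production systems use far richer telemetry.

```python
# A minimal anomaly-detection sketch: an Isolation Forest trained on
# simple per-connection features flags statistical outliers for review.
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic "normal" traffic: [bytes_sent, duration_s, dst_port]
rng = np.random.default_rng(0)
normal = np.column_stack([
    rng.normal(5_000, 1_500, 1_000),   # typical payload sizes
    rng.normal(2.0, 0.5, 1_000),       # short sessions
    rng.choice([80, 443], 1_000),      # common web ports
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A suspicious flow: huge upload, long session, unusual port.
suspect = np.array([[900_000, 120.0, 4444]])
print(model.predict(suspect))  # -1 => flagged as an anomaly
```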

Malware analysis

The insights that AI can generate can give security teams a much better, in-depth understanding of how different types of malware work in practice. Being able to automate this analysis means security teams can access these insights much faster and can make more informed decisions about where and how their defenses are designed and deployed.
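As a small taste of what automating one analysis step can look like, the sketch below computes the byte entropy of a file, a cheap static feature often used to triage samples: packed or encrypted malware tends toward near-maximal entropy (about 8 bits per byte). The file path shown is a placeholder.

```python
# Toy triage feature: Shannon entropy of a file's bytes. High values
# (> ~7.5) often suggest packing or encryption, marking the sample
# for deeper analysis.
import math
from collections import Counter
from pathlib import Path

def byte_entropy(path: str) -> float:
    data = Path(path).read_bytes()
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

# Example (the path is a placeholder):
# print(byte_entropy("sample.bin"))
```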

Vulnerability scanning

AI tools can assess systems and infrastructure to work out where the biggest vulnerabilities lie, so that security teams can better prioritize their time and resources in shutting those vulnerabilities down. This information is vital for better patch and update management, ensuring that the most pressing upgrades are installed first.
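For illustration, a simplified prioritization pass might rank scanner findings by CVSS score, weighted up when a public exploit exists and when the affected asset is business-critical. The weighting scheme and the CVE identifiers below are placeholder assumptions, not a standard.

```python
# Illustrative vulnerability prioritization over scanner output.
# Weights and example entries are assumptions for demonstration.
from dataclasses import dataclass

@dataclass
class Finding:
    cve: str                   # placeholder IDs below, not real CVEs
    cvss: float                # 0.0-10.0 base score
    exploit_available: bool
    asset_criticality: int     # 1 (low) to 3 (business-critical)

def priority(f: Finding) -> float:
    return f.cvss * (1.5 if f.exploit_available else 1.0) * f.asset_criticality

findings = [
    Finding("CVE-0000-0001", 9.8, True, 3),
    Finding("CVE-0000-0002", 7.5, False, 1),
]
for f in sorted(findings, key=priority, reverse=True):
    print(f"{f.cve}: patch priority {priority(f):.1f}")
```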

Identity and Access Management (IAM)

AI has substantially bolstered the strength of Identity and Access Management through its ability to spot unusual patterns in user activity, from access attempts made from unfamiliar locations to changes in typing speed and mouse movement. These anomalies can be flagged in real time to block potentially unauthorized access and/or to trigger additional layers of verification that confirm legitimate usage.
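Here is a minimal sketch of one such behavioral check, assuming an illustrative baseline and threshold: compare a login attempt's typing speed against the user's history and demand step-up verification when it deviates too far.

```python
# Toy behavioral check: flag logins whose typing speed deviates
# sharply from the user's baseline. Baseline data and the z-score
# threshold are illustrative assumptions.
from statistics import mean, stdev

baseline_wpm = [62, 58, 65, 60, 63, 59, 61]  # user's past typing speeds

def needs_step_up(observed_wpm: float, history: list[float],
                  z_threshold: float = 3.0) -> bool:
    mu, sigma = mean(history), stdev(history)
    return abs(observed_wpm - mu) / sigma > z_threshold

print(needs_step_up(25, baseline_wpm))  # True: demand extra verification
print(needs_step_up(60, baseline_wpm))  # False: looks like the real user
```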

Ethical hacking

Ethical hacking and penetration testing are crucial tools for discovering vulnerabilities and closing them off proactively. However, they can be slow, time-consuming exercises simply because of the size of the workload involved. AI can lighten this burden in two important ways: automating the simpler, more repetitive tasks that slow teams down, and analyzing the results to deliver actionable insights.
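As one example of the repetitive groundwork such tooling can automate, the sketch below performs a basic TCP connect sweep of a handful of common ports. It is for authorized testing only, and the target address is a placeholder from the TEST-NET documentation range.

```python
# Basic TCP connect sweep: one of the repetitive pentest chores that
# automation takes off analysts' plates. Authorized testing only.
import socket

COMMON_PORTS = [22, 80, 443, 3389, 8080]

def sweep(host: str, ports=COMMON_PORTS, timeout=0.5) -> list[int]:
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 => connection succeeded
                open_ports.append(port)
    return open_ports

# print(sweep("192.0.2.10"))  # placeholder address: use an in-scope host
```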

Risk assessments

The same benefits AI brings to ethical hacking can also be applied to developing risk assessments and prioritizing threats according to their level of danger. This, too, can be a mundane, time-consuming task (however important it may be), so using AI to speed up the process, take care of repetitive jobs, and drive more detailed insights can be extremely useful.
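To illustrate the prioritization step itself, here is a compact likelihood-times-impact scoring pass over a toy risk register; the scale and the example entries are assumptions for demonstration.

```python
# Toy risk register: score each threat as likelihood x impact
# (both on a 1-5 scale) and rank the register accordingly.
risks = {
    "AI-generated phishing": (5, 4),       # (likelihood, impact)
    "Deepfake voice fraud": (3, 5),
    "Ransomware via stolen credentials": (4, 5),
}

ranked = sorted(risks.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True)
for name, (likelihood, impact) in ranked:
    print(f"{name}: risk score {likelihood * impact}")
```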

How to protect yourself from AI cyberattacks

Alongside the AI-based security capabilities above, there are plenty of ways to minimize the chances of AI hacking being successful. From our expert standpoint, the best way forward brings together people, planning, and technology:

Create an incident response plan

The faster you can respond to an incident, the quicker you can mitigate any impact and ensure that the damage doesn’t spread too widely. Just like a fire drill, you should know exactly what to do when a security incident occurs.
Your plan should encompass preparation (i.e., prevention and how to respond), detection and analysis (confirming the nature and severity of the attack), containing and eradicating the attack (system isolation, remediation, and patching), and recovery tactics that prevent the incident from happening again. Learn more about how to build an effective strategy with Kaspersky’s Incident Response solutions.

Use the strongest security platforms available

Ensure that you have the most comprehensive security protections in place to defend you from existing threats and new attacks based on AI cybercrime. Kaspersky Premium, for example, provides real-time anti-virus protection that immediately takes action to prevent an attack; it also includes Identity Protection that further safeguards sensitive personal data.

Assess security posture regularly

With both AI and cybercrime evolving at a rapid rate, the security measures you have in place today may not be sufficient 12 months down the line (or potentially even sooner). This means security should be audited and reviewed regularly; AI can support this through real-time analysis of user activity, spotting any anomalous behavior compared with the previous audit period.

Maintain user and employee security awareness

The continued success of phishing attacks, whether developed using AI or not, underlines the fact that good security begins with users. Now is the time to make users aware of the risk of AI hacking and related attacks, especially around just how convincing AI-generated text, audio, and video can be. This can help remind users to stay extremely vigilant and not conduct any activity unless they are 100% sure that it's safe to do so. Explore how Kaspersky's Cybersecurity Training can help your team stay vigilant against AI-driven attacks.
