Data and privacy

When AI goes low, go high: How ethical AI fights unethical AI

AI is much maligned for its many nefarious uses, but where AI can be a problem, it may also be a solution.



“This is the world now. Logged on, plugged in, all the time,” said Jason Clarke in the 2015 movie Terminator: Genisys, set in 2029. He might as well have been talking about today. Devices have become so integral that today’s world wouldn’t be possible without artificial intelligence (AI). AI powers everything from smart homes to customer service to airport security.

In Terminator, the sentient artificial neural network Skynet sends robots back in time from 2029 to kill the humans who will, in the future, lead the resistance against it. Technology today might be quickly catching up to some aspects of Terminator’s fictional 2029, but AI experts say we’re a long way from evil, self-aware humanoid robots taking over the world. AI that comes close to human intelligence is so far off that Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, says worrying about it is “like being worried about overpopulation on Mars before we even have gotten a person on Mars.”

But there’s more to AI than what science fiction shows us. Surveillance and cybercrime applications of AI are concerning, but AI can also strengthen systems and improve data privacy. Like in Terminator, real-life AI can be ethical or unethical.

How can AI be ethical?

As well as keeping databases and systems safe, ethical AI can influence human behavior, ‘nudging’ people to make better decisions.

Geoff Webb, vice president for solutions, product and marketing strategy at AI-enabled payroll firm isolved, says, “We should expect to see greater use of AI not only to protect privacy, but to enhance ethical behavior. A good example is the use of AI to evaluate things like diversity and inclusion within a business. We’ll see AI moving from protecting data to helping us see what that data means.”

If ethical AI can nudge us to make better decisions, nefarious AI could nudge us into making unethical decisions.

“Businesses are looking to implement machine learning, which gives them a smart, efficient and effective way of querying all of the data they hold about people and chopping that data up in various ways that can be useful to them. But we need to recognize criminals can also make use of this,” says David Emm, Principal Security Researcher at Kaspersky, in the audio series Fast Forward by Tomorrow Unlocked.

Anonymizing and analyzing data more privately

Companies collect mountains of data in the course of doing business, and each of us generates data when we check a message from a friend, browse the internet or use an app to check the weather. Some of that data is sensitive and confidential, yet to analyze it, someone has traditionally had to look at it. AI might be able to fix that.

AI models can scrub data of identifying characteristics and draw conclusions from large data sets. Francesco Di Cerbo, AI privacy research lead at SAP Security Research, said, “It’s like looking at blurred shadows of people. We can still see how many there are, their heights and postures, but not their faces. We can extract information like average and distribution of heights, but it will be hard to identify them. For example, an AI technique like Named Entity Recognition (NER) can detect a date in a sentence, but it might be an invoice date or birth date. Our models must know if that date is personal information. Once different pieces of information are detected in a sentence, we proceed to anonymization.”
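As a rough illustration of the idea (not SAP’s actual pipeline), a few lines of Python can find date-like and email-like spans in text and replace them with typed placeholders. Here, regular expressions stand in for a trained NER model; as Di Cerbo notes, a real system would also have to judge whether a detected date is personal information at all, which pattern matching alone cannot do:

```python
import re

# Hypothetical minimal redactor: regexes stand in for a trained NER model.
PATTERNS = {
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def anonymize(text: str) -> str:
    """Replace each detected entity with a typed placeholder token."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

print(anonymize("Born 12/05/1990, contact jane.doe@example.com"))
# → Born <DATE>, contact <EMAIL>
```

The anonymized output still supports aggregate analysis (how many records mention a date, for instance) while the identifying values themselves never reach a human analyst.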

Using AI tools to anonymize data keeps it safe from inadvertent human slips or breaches. “Well-written AI tools don’t get distracted, share data when they shouldn’t or forget to delete sensitive data, but people do, all the time,” says Webb.

Humans make mistakes. AI (mostly) doesn’t.

The weakest points of a database are always those where humans are involved. A Kaspersky survey found 46 percent of security incidents that led to business data being exposed were linked to employee error.

AI could mean human operators never need to see sensitive data, making the processing and sharing of that data more private and secure.

“AI tools can operate on the data without exposing it to humans. Analysis within platform means reduced risk of someone needing to copy it or print it, or accidentally exposing it,” says Webb. “If the platform is secure, granting an AI tool access is often less risky than a human pawing through the data.”

AI can’t be bribed into revealing healthcare data or details of a celebrity’s divorce proceedings, for instance. It doesn’t leave sensitive data sitting around on a desk or unsecured device.

Ethical AI in cybersecurity

In 1999, a 15-year-old hacked into NASA and the US Department of Defense, downloading software worth 1.7 million US dollars and shutting systems down for three weeks. More than two decades later, security systems are more sophisticated and harder to break into, but cybercriminals are smarter too. The days of teenagers hacking into government databases are gone – today’s cyberattacks are dynamic and customized.

Traditional security can fail against targeted attacks, but AI can strengthen fortifications and respond more effectively to phishing attacks or breaches. Di Cerbo says, “Automatic defense approaches using AI are highly effective.” AI is particularly useful in password protection and user authentication.

Defensive AI can rapidly detect and contain any emerging cyberthreats, and it’s the best way to fight back when AI is part of the attack method. According to Capgemini’s report AI in Cybersecurity, hackers use AI for more sophisticated attacks, including spear phishing Tweets – personalized Twitter messages sent to targeted users tricking them into sharing sensitive information. AI can send Tweets six times faster than a human, with twice the success. But while AI can be used for foul play, it also deters these attacks. “Alongside traditional methods, AI is a powerful tool for protecting against cybersecurity attacks,” says Capgemini.
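To make the defensive side concrete, here is a deliberately simple sketch of automated message screening – a hypothetical heuristic for illustration only, not Capgemini’s or Kaspersky’s method. Real defensive AI uses trained models over far richer signals (sender history, URL reputation, language patterns), but the shape is the same: score a message’s red flags faster than any human reviewer could:

```python
# Toy phishing scorer: counts simple red flags in a message.
# Illustrative only -- production systems use trained classifiers.
URGENCY_WORDS = {"urgent", "verify", "suspended", "immediately"}

def phishing_score(message: str) -> int:
    """Return a suspicion score; higher means more phishing-like."""
    text = message.lower()
    score = sum(word in text for word in URGENCY_WORDS)
    if "http://" in text:  # unencrypted link is a common red flag
        score += 1
    if "login" in text:    # credential-harvesting lure
        score += 1
    return score

print(phishing_score("URGENT: verify your account at http://example.test/login"))
# → 4
print(phishing_score("See you at lunch tomorrow"))
# → 0
```

Messages above a chosen threshold would be quarantined or flagged for review, letting the system react to high-volume, machine-generated attacks at machine speed.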

AI can’t do everything, but its potential is great

AI is far from a substitute for human oversight. AI programs have famously replicated human biases – from facial recognition systems failing to recognize People of Color to an AI recruitment tool that downgraded applications from women.

But well-designed AI can create a more secure online world, keep data safe and nudge humans towards more ethical behavior. “Security, privacy and compliance will remain business challenges because the counter-pressures are always changing – hackers develop new tools, new privacy expectations and legislation evolve,” says Webb.

“While businesses could try to manage these risks without AI, it would be complex and expensive. And the more humans involved in processes, the more likely human error or malicious activity will result in a breach,” Webb continues. “AI tools can’t do everything, but they can deliver better insight more quickly and with less risk of expensive and ugly data exposure.”

US technology historian Melvin Kranzberg’s first law of technology famously states, “Technology is neither good nor bad; nor is it neutral.” Perhaps no technology is called good, bad and non-neutral more often than AI – and perhaps fairly. In that sense, AI is what technology has always been.

As in Terminator: Genisys, where good AI robots battle bad AI robots to save humans, today we see AI on both sides of the ethical fence. The question isn’t whether we use this power, but how we use it wisely.


About the author

Aishwarya Jagani is a freelance tech writer based in India. She writes on cybersecurity, science and the human impact of technology.