With ChatGPT’s vast innovative potential, regulators must take care

Its human-like responses are set to revolutionize our experience of tech, so business needs a clear view of the policy landscape around ChatGPT.

ChatGPT and other artificial intelligence (AI) chatbot applications are major technological milestones with far-reaching benefits for many industries.

But with each advance come unintended consequences. With ChatGPT in its fourth iteration (GPT-4) and its underlying technologies growing fast, it's time to take stock of the policy landscape and understand the implications.

What is ChatGPT, and how are businesses using it?

ChatGPT is an AI chatbot made by OpenAI. It uses natural language processing (NLP) to create human-like text responses. Users can ask ChatGPT almost anything and receive relatively natural-sounding answers. It can write programming code and social media posts, summarize complex topics, and produce almost any other kind of text.

What is ChatGPT?

ChatGPT is an example of generative AI: Artificial intelligence that learns to generate original content like images, music and text by identifying underlying patterns in existing examples. “Its primary objective is to create interactive and realistic conversations with users, making it a powerful tool for chatbots, virtual assistants and customer support applications,” says AWS Solutions Architect Ragu Kuppannan.

Many top companies are using ChatGPT. For example, travel company Expedia wants customers to be able to book a holiday in a way that feels like chatting with a travel agent. Microsoft, a major investor in OpenAI, says ChatGPT technology already powers its Bing search engine, and it plans to add it to Word and Excel.

ChatGPT could be promising for any application where natural, conversational interaction between people and tech would be beneficial.
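To make this concrete, here's a minimal sketch of how a business application might call OpenAI's chat API using the openai Python package. The model name, system prompt and helper function are illustrative assumptions, not a production recipe:

from openai import OpenAI

# Reads the API key from the OPENAI_API_KEY environment variable.
client = OpenAI()

def ask_travel_assistant(question: str) -> str:
    """Send a customer question to the model and return its text reply."""
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative; choose whatever model fits your use case
        messages=[
            # The system message frames the assistant's role and tone.
            {"role": "system", "content": "You are a friendly travel-booking assistant."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask_travel_assistant("Find me a week-long beach holiday in May."))

In practice, businesses layer guardrails, logging and human review on top of calls like this – which is exactly where the policy questions below begin.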

Risks of ChatGPT

Critiques of ChatGPT and other chatbot applications generally focus on their potential to disrupt human employment, produce incorrect information, or rely on poor practices when training algorithms to identify harmful content.

The underlying cybersecurity risks get less attention. Kaspersky’s specialist threat intelligence teams have identified three main risk areas.

First, ChatGPT can provide advice, even detailed guides, to help prospective attackers plan attacks and target victims. Second, it can develop code, and even entire coding projects, for those with little knowledge of software development. While ChatGPT has restrictions and safeguards against abuse, they're fairly easily bypassed. Third, ChatGPT produces convincing, well-written text that can be used in phishing or spear-phishing campaigns.

To some extent, these three areas represent theoretical risks or outlier scenarios that don't reflect how people and businesses usually use ChatGPT.

ChatGPT is only a language model, which is neither good nor bad — but it is advanced, publicly available and popular.

Policy perspective beyond risks

Policy should aim to be technology-blind – taking a similar approach to all related technology that might emerge in the future, rather than creating a unique approach for one of today’s concerns. It should build the best possible defenses against the worst human intentions and applications.

Practically speaking, there may be no way to prevent or limit the use of tech like ChatGPT. And limiting an emerging technology's potential means being left behind in a global digital economy.

The cost of being left behind will only increase as technologies develop fast into convenient tools that improve life and work. But an overly permissive environment that ignores credible risks could be equally damaging.

While policy discussions usually focus on outputs – the material ChatGPT produces – its inputs (the data it's trained on) may be a bigger threat. Data-driven tools like ChatGPT are only as good and accurate as their training datasets. If chatbot systems become the most common way to use the internet, problems in their training data will have a wide impact.

Ensuring balance in training data is a growing challenge as disinformation expands. Bad actors can influence what users see and read in ways that have global consequences. Datasets, devices and systems must be secure and accurate.

How nations are regulating AI

In our public policy work, Kaspersky advocates public-private partnerships. We think partnerships combining global private-sector expertise with policymakers’ local needs will be the best defense. It’s a middle-ground approach that avoids too much or too little caution. These partnerships in cybersecurity maximize the advantages of emerging technologies like ChatGPT, while safeguarding against risks.

The United Arab Emirates (UAE) is a role model for inclusive public-private partnerships. The country has taken a progressive, collaborative stance on working with international tech firms. Initiatives and forums like the Dubai Chamber of Digital Economy and the UAE Cybersecurity Council aim to advance technological growth safely and securely, for both short- and long-term benefit.

In May 2023, Brazil was considering a bill to regulate AI. The bill would require "systems [to] undergo a preliminary assessment carried out by the suppliers themselves, to determine whether they can be classified as being of high or excessive risk," according to OneTrust Data Guidance. It would also restrict exploitative uses of AI, like subliminal techniques that could cause harmful behavior or prey on vulnerabilities in older and disabled people.

The Center for Strategic and International Studies says Japan looked at the European Union's stricter approach to AI regulation and expressed "concern that the burden of compliance and the ambiguity of regulatory contents may stifle innovation." Instead, Japan is adopting a "human-centered" approach based on seven principles, among them privacy protection, security and innovation. It regulates AI through several existing laws. The Digital Platform Transparency Act, for example, requires fair and transparent practices from large online stores and digital advertising businesses, including disclosing how they decide search rankings.

Values are the key to balanced regulation

ChatGPT and other generative AI tools fit a pattern of technological development we've seen throughout history – they disrupt lives and businesses, with exciting potential but also serious risks that need policymakers' attention.

The UAE, Brazil and Japan have regulatory approaches that are values-led and aim for a middle ground: A balance between too much and too little caution. Alongside regulation, partnerships between the public and private sectors hold promise to let businesses embrace innovation while adequately managing risk.

About the author

Genie leads Government Affairs and Public Policy for Kaspersky in Asia-Pacific, Middle East and Africa. She’s an award-winning policy professional and international speaker. She develops trusted relations with government, institutional stakeholders and enterprises by integrating business, communications and public policy strategies with effective advocacy.