ChatGPT and Generative AI – a balancing act between too much vs too little caution

Genie Sugene Gan

Head of Government Affairs & Public Policy, APJ & META, Kaspersky

ChatGPT sent the world into a flurry earlier this year, hailed as a technological milestone with far-reaching benefits for users around the globe.

But with each advancement come unintended consequences. With ChatGPT now in its fourth iteration and its underlying technologies developing fast, it’s time to take stock of the policy landscape and understand the implications.

What is ChatGPT and how is it used?

ChatGPT is an AI chatbot made by OpenAI. It uses natural language processing (NLP) to create human-like text responses. Users can ask ChatGPT almost anything and receive relatively natural-sounding answers. It can write programming code and social media posts, summarize complex topics, and produce almost any other kind of text. ChatGPT is an example of generative AI: artificial intelligence that learns to generate original content such as images, music and text by identifying underlying patterns in existing examples.
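For readers curious what this looks like in practice, below is a minimal sketch of how an application might query a ChatGPT-style model through OpenAI’s Python SDK (the v1 client). The model name, roles and prompts are illustrative assumptions, not details from this article, and the exact interface may differ between SDK versions.

    # Minimal sketch: querying a ChatGPT-style model via OpenAI's Python SDK.
    # Assumes the OPENAI_API_KEY environment variable holds a valid API key;
    # the model name and prompts below are illustrative only.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4",  # assumed model identifier
        messages=[
            {"role": "system", "content": "You are a helpful travel assistant."},
            {"role": "user", "content": "Plan a three-day itinerary for Tokyo."},
        ],
    )

    # The reply is natural-language text generated from patterns learned
    # during training -- the defining trait of generative AI.
    print(response.choices[0].message.content)

This kind of conversational request-and-response loop is the building block behind the commercial applications described below.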

Many top companies are using ChatGPT. For example, travel company Expedia wants customers to be able to book a holiday in a way that feels like chatting with a travel agent. Microsoft, a major investor in OpenAI, says ChatGPT already powers its search engine, Bing, and plans to add it to Word and Excel. In a nutshell, ChatGPT could be promising for any application where natural, conversational interaction between people and technology would be beneficial.

Risks of ChatGPT

Not to be a ‘party pooper’ here, but as the world focuses on the benefits of such generative AI tools, the underlying cybersecurity risks are hardly spoken about.

Kaspersky’s specialist threat intelligence teams have identified three main risk areas:

  1. ChatGPT can provide advice, even detailed guides, to help prospective attackers plan and target victims.
  2. It can develop code, and even create entire coding projects, for those with little knowledge of software development. While ChatGPT has restrictions and safeguards to prevent abuse, they can be bypassed fairly easily.
  3. ChatGPT produces convincing and accurate text that can be used in phishing or spearphishing.

Policy perspective beyond risks & AI regulation

From a policy perspective, despite these risks, there is practically no way to prevent or limit the use of technologies such as ChatGPT. And to limit an emerging technology’s potential is to be left behind in the global digital economy.

And so we embrace technology. But at what cost? We enter a dilemma and a never-ending debate.

The cost of being left behind will only increase as technologies develop fast into convenient tools that improve life and work. But an overly permissive environment that ignores credible risks could be equally damaging.

While policy discussions usually focus on outputs – the material ChatGPT produces – its inputs (the data it is trained on) may be the bigger threat. Data-driven tools like ChatGPT are only as good and as accurate as their training datasets. If chatbot systems become the most common way to use the internet, problems in that data will have a correspondingly wide impact.

Ensuring balance in training data is a growing challenge as disinformation spreads. Bad actors can influence what users see and read in ways that have global consequences. Datasets, devices and systems must therefore be kept secure and accurate.

In our public policy work, Kaspersky advocates regional and international cooperation, as well as public-private partnerships. We believe partnerships that combine global private-sector expertise with policymakers’ local needs will be the best defense. It is a middle-ground approach that avoids having to choose between too much and too little caution. Such cybersecurity partnerships maximize the advantages of emerging technologies like ChatGPT while safeguarding against their risks.

The United Arab Emirates (UAE) is a role model for inclusive public-private partnerships, having taken a progressive and collaborative stance on partnerships with international tech firms. Initiatives and forums such as the Dubai Chamber of Digital Economy and the UAE Cybersecurity Council aim to advance technological growth safely and securely, for both short- and long-term benefit.

In May 2023, Brazil was considering a bill to regulate AI. The bill requires “systems [to] undergo a preliminary assessment carried out by the suppliers themselves, in order to determine whether they can be classified as being of high or excessive risk,” according to OneTrust Data Guidance. It would also restrict exploitative uses of AI, such as subliminal techniques that could cause harmful behavior or the targeting of vulnerabilities in older and disabled people.

More recently, in July 2023, Australia invited industry feedback as it examines what considerations should go into supporting responsible AI. And in August 2023, Singapore likewise opened two drafts for public consultation: its Advisory Guidelines on the Use of Personal Data in AI Systems under the Personal Data Protection Act (PDPA), which aim to clarify how the PDPA applies when organizations collect and use personal data to develop and deploy AI systems with machine learning models; and its Advisory Guidelines on the PDPA for Children’s Personal Data, which address, among other things, obtaining children’s consent and implementing additional protection standards.

The Center for Strategic and International Studies says Japan looked at the European Union’s stricter approach to AI regulation and had “concern that the burden of compliance and the ambiguity of regulatory contents may stifle innovation.” Instead, Japan is adopting a “human-centered” approach based on seven principles – among them privacy protection, security and innovation – and regulates AI through several existing laws. The Digital Platform Transparency Act, for example, requires fair and transparent practices from large online stores and digital advertising businesses, including disclosure of how they decide search rankings.

Values are the key to balanced regulation

ChatGPT and other generative AI tools fit a pattern of technological development we have seen throughout history: they disrupt human lives and businesses, bringing exciting potential but also serious risks that need policymakers’ attention.

The balancing act between too much and too little regulation is a tough one. I posit that, ultimately, values should be the key driver of balanced regulation. By and large, the countries cited above have taken regulatory approaches that are values-led and aim for a middle ground: a balance between too much and too little caution. Alongside this, partnerships between the public and private sectors hold promise to let business embrace innovation while adequately managing risk.
