AI POLICY IN 2024: WHAT CAN BE EXPECTED AND WHAT SHOULD BE DONE?

Yuliya Shlychkova, Director of Global Public Affairs, Kaspersky

Andrey Ochepovsky, Public Affairs Manager, Kaspersky

In recent years, AI has been the hottest topic in technology policy, discussed at both national and international levels. At the same time, the regulatory architecture for AI is still taking shape. Consequently, actors that manage to take the lead in this process are in pole position to shape the AI regulatory framework to their advantage.

Unsurprisingly, 2023 saw numerous countries and international organizations taking steps to promote their vision of AI regulation, with the unannounced but implied goal of setting an attractive example for other countries to follow. The most notable initiatives of this kind were developed by the G7 (the Hiroshima AI Process) and China (the Global AI Governance Initiative). The UN has also been trying to keep up, establishing a High-Level Advisory Body on AI under its auspices.

Meanwhile, growing interest in establishing the “rules of the game” for AI can also be observed at the regional level. In Europe, work on the EU’s AI Act is underway. If enacted, the Act would introduce a risk-based classification of AI systems, distinguishing “unacceptable-risk” systems (those that may threaten fundamental values and human rights), “high-risk” systems (for example, AI used in critical infrastructure) and “limited or minimal-risk” systems (recommendation services, for instance). In Southeast Asia, ASEAN is developing a guide to AI ethics and governance. In Africa, the African Union has drafted a continental AI strategy, scheduled for adoption in 2024.

In addition, individual countries are looking to establish frameworks (whether non-binding or obligatory) to secure the safe and sustainable use of artificial intelligence. This trend can be observed around the globe – from Brazil and Canada to Thailand and New Zealand.

Such a multitude of initiatives increases the risk of fragmentation in the global AI regulatory landscape, which, in turn, threatens to hinder transnational cooperation on AI and to deepen the gap between the leaders in AI development and the rest of the world. Although some prominent AI players, including the USA, China and the EU, have pledged to cooperate on ensuring the safe use of artificial intelligence (the Bletchley Declaration), it remains to be seen whether these commitments will stand the test of time and today’s complicated geopolitical situation.

What can be expected with regard to AI policy?

It is clear that artificial intelligence is here to stay for years and decades to come. Accordingly, demand for regulatory frameworks in this domain will grow around the world in two ways. First, more countries and international organizations are expected to embark on this path in the coming year. In particular, the spotlight will be on African and Asian countries, many of which are still laying the foundations of domestic AI regulation but are already actively engaged in discussions on the topic. Second, countries and international organizations that already have frameworks in place will expand them by adopting more detailed norms tailored to specific aspects of artificial intelligence (the creation of training datasets, the use of personal data, etc.) or to its use in particular sectors (government, critical infrastructure, etc.) – the logical next step toward a comprehensive AI regulatory regime.

At the same time, it is worth noting that the AI regulatory landscape is far from homogeneous: analysis of existing initiatives reveals at least two camps. The first is represented primarily by the EU’s AI Act, with its risk-based approach, a legal ban on the most “dangerous” AI systems and penalties for non-compliance. The second camp, exemplified by an AI bill introduced in Brazil, favors the “carrot” rather than the “stick”, prioritizing recommendations and non-binding guidelines over bans and strict regulation. Judging by what we have seen so far, the competition between these two camps is likely to intensify. Given their profound differences, it is difficult to imagine the “restrictive” and “enabling” approaches being combined into a “third way” that suits all interested parties.

As a result, the risk of fragmentation of the global AI regulatory landscape is real. This threat has already been recognized by some major players in the AI domain, who signed the above-mentioned Bletchley Declaration in an attempt to promote uniformity in this sphere. However, the currently escalating geopolitical tensions are likely to have a negative impact on intergovernmental dialogue, derailing efforts to overcome the potential global fragmentation of AI regulation.

So, what should be done to avoid these existing and potential pitfalls and make AI regulation effective?

First and foremost, all AI-related regulatory initiatives should prioritize safety and privacy in the development and use of artificial intelligence. Since its “birth”, AI has been inextricably linked in the public consciousness with the potential risks (both real and imagined) it may bring. In particular, questions such as “Why does AI need personal data for training?”, “What should be done to minimize the risks of using personal information in training datasets?” and “How can the risks of potential AI errors be mitigated?” require clear answers and proposed solutions. To meet these demands, government bodies around the world must put safety and privacy at the forefront of any future initiative to regulate artificial intelligence. This will foster an environment where AI can be used safely and individual rights are respected, reassuring those who fear the risks posed by artificial intelligence.
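To make the second of those questions more concrete, the sketch below illustrates one common safeguard: pseudonymizing personal identifiers before text samples enter a training dataset. This is a minimal, hypothetical Python illustration, not a description of any specific vendor’s pipeline; the regex patterns, salt value and token format are our own assumptions, and a production system would rely on far more robust PII detection.

```python
# Illustrative sketch only: pseudonymize obvious personal identifiers
# (e-mail addresses and phone numbers) before a text sample is added
# to a training corpus. Patterns and salt are assumptions for the demo.
import hashlib
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def pseudonymize(text: str, salt: str = "rotate-me-per-release") -> str:
    """Replace matched identifiers with salted, irreversible tokens, so
    records stay linkable for deduplication but no longer expose raw PII."""
    def token(match: re.Match) -> str:
        digest = hashlib.sha256((salt + match.group()).encode()).hexdigest()[:10]
        return f"<PII:{digest}>"
    # E-mails are tokenized first so that digit runs inside addresses
    # are not split up by the broader phone-number pattern.
    return PHONE.sub(token, EMAIL.sub(token, text))

if __name__ == "__main__":
    sample = "Contact Jane at jane.doe@example.com or +1 (555) 010-7788."
    print(pseudonymize(sample))  # both identifiers become <PII:...> tokens
```

Salted hashing is used here rather than plain deletion because it keeps records linkable (the same address always maps to the same token) while making the original identifier irreversible – the kind of property that privacy-focused AI rules tend to require.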

We also suggest that private stakeholders should play an important role in developing AI-related norms and practices. Non-state actors (primarily in the corporate sector) have already accumulated vast expertise in developing and using artificial intelligence, and can therefore make an invaluable contribution to the discussion of AI regulation at both global and national levels. The exchange of views between the public and private sectors on artificial intelligence can take various formats (public consultations, the work of advisory bodies, etc.), and this model clearly benefits all parties. On the one hand, it is difficult for governments to keep up with the rapid progress of AI and refine existing regulatory frameworks in a timely manner without the cooperation of other interested parties. On the other hand, private stakeholders also benefit from this dialogue, as it gives them an excellent opportunity to share their ideas on the best ways of regulating artificial intelligence.

Finally, a broad international dialogue on AI regulation is urgently needed. While artificial intelligence opens up vast opportunities for humanity as a whole, the potential fragmentation of the AI regulatory landscape could substantially hinder the global progress these technologies can offer. We therefore call for an intensified global discussion on AI policy among governments, with the participation of relevant private stakeholders. In our view, this would at least help align the core principles of AI regulation worldwide and, consequently, establish common “rules of engagement” in this sphere.
