With new technologies like the metaverse arriving regularly, how can business leaders and cybersecurity professionals know what new threats to expect? Futurism – analyzing future scenarios to inform today’s decision-making – is one way.
Victoria Baines is professor of information technology at Gresham College in the UK, and a former Facebook executive and Europol officer. She uses future scenarios to understand technology today and how we should prepare for threats to come. We chat about what these scenarios reveal about our world today, what might happen with new technologies like the metaverse, and what businesses can do to keep users safe.
Gemma: What does it mean to explore the future of cybersecurity through scenarios?
Victoria: Scenarios are not predictions, but if you do them right, they tell you as much about your situation now as what might happen in future.
It’s not just threats – it’s how the world’s changed, and what the tipping points may be. The pandemic was a tipping point for remote and hybrid working, but many find hybrid work just means doing all the things, all the time – the expectation they’ll be available constantly for online and physical meetings. We’d been working up to that for a long time. Reliance on email was a compelling signal we needed to find something better, quieter, with less churn.
I think we’ll see something like that with the metaverse. We’ve now got the ultimate business case for it because Zoom and Teams aren’t enough to feel like we’re really working with people – we’re still just on a call. Futures methodology is as much about exploring our present and spotting things in the past.
Cybersecurity futures reports always say, ‘Next year we’ll see more ransomware, more supply chain attacks,’ same as the previous year. We need something more imaginative. It’s bigger than more threats – society is changing, and we need to plan for that.
What have you found out about the metaverse through your cyber futures work?
Although there’s hype and uncertainty around the metaverse, we have certainties about how it will develop. For example, Apple will probably bring out, or at least announce, a mixed-reality headset next year. However expensive, people want to use whatever Apple brings out. So we’ll see people wanting Apple’s mixed-reality headset in a way they haven’t wanted other virtual reality (VR) headsets like HoloLens or Oculus. This will significantly speed up metaverse adoption.
The metaverse is less of a tipping point and more of an evolution. Roblox, Fortnite and Second Life are already metaverses. They use the feeling of presence and co-presence, and there’s old research on this – academics like Mel Slater and Ralph Schroeder showed us that being together virtually is emotional – it brings about positive reinforcement.
We’ll need AI-powered synthetic individuals to bump into in online social spaces because we need to feel like we’re interacting with people – shaking hands, having sex and so on. We’ll probably need 6G to power and connect all that technology, particularly if we want to use it when out and about.
What security implications have you identified in the metaverse?
Nothing’s 100 percent secure – we must assume there’ll be technical and human vulnerabilities in every aspect of VR, augmented reality (AR) and other metaverse-enabling technologies.
In these scenarios, we look at all those vulnerabilities. We have signals already, like the panic around ‘Zoombombing’ – hackers interrupting video conferencing with obscene and violent material. It happened because people shared their passwords on the open internet. We needn’t panic – basic security measures would deal with it. But it shows that exploitation and infiltration will also play out in VR and AR.
Are there new cyberthreats for the metaverse and related immersive technologies, or is it the same threats in new spaces and technologies?
Much of it comes back to basic digital hygiene like patching, updating, and installing antivirus software. Basic ‘handwashing’ can do much to prevent ransomware infections. In that way, the new stuff isn’t new, but we haven’t given it enough attention.
You can fall into the trap of thinking the metaverse will be uniquely immersive and people will be uniquely psychologically harmed. But they’re experiencing this with existing technology – we’ve underestimated the emotional impact of being hacked.
We’ve viewed online harm as less impactful – as it’s not physical abuse, we treat it less seriously. With metaverse technologies, you can physically sense resistance and impact, which means opportunities for physical assault. That has an operational impact for information security and cybersecurity.
What implications do these threats have for cybersecurity leaders today?
Technical information security aspects become more important in addressing psychological and physical harms in these connected spaces. Interference with medical internet of things (IoT) devices like pacemakers signals that. We already have people walking around with internet-connected medical devices like defibrillators, insulin pumps and continuous glucose monitors. If those don’t function properly, and if somebody dies, the question of responsibility arises. If it hasn’t already happened, Chief Information Security Officers will be asked, ‘What did the data say about any interference with that pacemaker? Was the firmware up to date?’
We can see something similar playing out in the metaverse. Everyone will say it’s someone else’s responsibility – those who make the hardware will say it’s the experience developer’s responsibility, they’ll say it’s the user’s responsibility, and so on. People used to thinking only about data security and network security must now consider users’ physical safety.