This October, French researchers revealed a mind-controlled exoskeleton that let a man who developed quadriplegia after an accident walk again for the first time in four years. Two surgically implanted devices, comprising 64 electrodes, recorded his brainwaves, which a computer translated into instructions for the exoskeleton. Just by thinking about walking, the patient could move his legs.
While neurotechnology is in its infancy (this trial involved 65 kg of robotic equipment and a safety harness tethered to the ceiling), it’s a milestone that can bring hope to people with restricted mobility everywhere. But, like every new technology, it also comes with safety and security implications that have yet to be worked out.
In theory, these developments have profound implications for transhumanism, too; one day, people could become more machine than person in order to enhance their abilities. An exciting proposition, perhaps, for fans of cyberpunk fiction. But let’s not forget that every digital device is a potential target for hackers.
From proof of concept to widespread usability and hackability
As promising as this new tech is for helping patients with severe spinal injuries, it’s important to remember that it’s still a long way from clinical viability. But it does show where we’re heading, and the destination isn’t exclusively about revolutionizing medical technology. For exoskeletons controlled by neural implants, the next goal is to improve reaction time, making the suit more responsive and self-balancing. Other factors, such as reducing weight and maximizing the lifespan of the electrodes without risking tissue damage, are areas of active research.
It might be decades before such technology ends up in the wild, but that day is coming. Neural implants no longer exist solely in the realm of science fiction – they’re now a reality. Once this technology moves out of the lab, there’s a possibility of it ending up in the wrong hands, whether those of hackers, corrupt governments, or their militaries. Steps will need to be taken to build in security and privacy by design and, as with all digitally connected devices, to reduce the chances of brain implants being hacked – a more immediate, real-world concern.
As soon as brain-computer interfaces become widely available, hackers and unscrupulous private companies alike will no doubt be quick to jump on the bandwagon. Invasive methods aren’t all we have to worry about, either; the more immediate risk comes from non-invasive electroencephalogram (EEG) headsets, which attach externally to the scalp. And it’s coming sooner than you think.
Would you let companies read your thoughts?
Few would answer “yes” to such a question, but private companies are already developing the means to let people control their services with their minds alone. Facebook, which hardly has an excellent track record when it comes to privacy, is one of them. Recent comments by Facebook CEO Mark Zuckerberg at Harvard University revealed the company’s aim to develop technology that would allow people to use its services with their minds alone – no navigating menus or even typing. This raises the question: would you let a company have access to your state of mind so it can deliver more relevant ads, if, in exchange, you could order a pizza just by thinking about your favorite toppings?
While that might sound far-fetched, it’s important to remember that digital data, rather than oil or precious metals, is now the most sought-after commodity on the planet. And if we translate the capacity of the human mind into digital data, estimates run as high as 2.5 petabytes, which is equivalent to three million hours of streaming TV. Other estimates are somewhat lower: if the state-run media of North Korea can be believed, South Korea’s president only has a paltry 2 MB of storage.
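For the curious, the 2.5-petabyte figure above can be sanity-checked with a quick back-of-the-envelope calculation. This sketch assumes decimal (SI) petabytes and simply asks what streaming rate would make the two quoted numbers line up; the resulting rate is the only thing it derives.

```python
# Sanity-check: what per-hour streaming rate would make 2.5 petabytes
# equal roughly three million hours of TV? (Both figures from the text;
# decimal SI units are an assumption.)

PETABYTE = 10 ** 15                 # bytes
capacity_bytes = 2.5 * PETABYTE
hours = 3_000_000

bytes_per_hour = capacity_bytes / hours
megabits_per_second = bytes_per_hour * 8 / 3600 / 10 ** 6

print(f"{bytes_per_hour / 10**9:.2f} GB per hour")    # ~0.83 GB/hour
print(f"{megabits_per_second:.2f} Mbit/s sustained")  # ~1.85 Mbit/s
```

A sustained rate of roughly 1.85 Mbit/s is in the ballpark of standard-definition streaming, so the comparison holds up as a rough order-of-magnitude claim.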
Though claims range from woefully absurd to impractically abstract, Zuckerberg also reminded us that the fastest way we can currently get information out into the world is by speech. That’s equivalent in bandwidth to a 14.4k modem. With a brain-to-computer interface, it’s possible to transmit data almost as fast as we can think about it – or about five times faster than speech. If you’re starving, you could get that pizza delivered a whole minute faster.
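The pizza quip above can also be made concrete. In this sketch, the modem-equivalent rate and the 5x speedup come from the comparison in the text, while the 75-second spoken order is purely an illustrative assumption.

```python
# Back-of-the-envelope comparison of speech vs. thought "bandwidth",
# using the figures quoted above. The 75-second spoken order is an
# illustrative assumption, not a measured value.

SPEECH_BPS = 14_400            # speech likened to a 14.4k modem
THOUGHT_BPS = SPEECH_BPS * 5   # brain-to-computer interface: ~5x speech

spoken_order_seconds = 75      # assumed time to dictate a pizza order
bits_in_order = SPEECH_BPS * spoken_order_seconds

thought_order_seconds = bits_in_order / THOUGHT_BPS
time_saved = spoken_order_seconds - thought_order_seconds

print(f"Spoken order:  {spoken_order_seconds} s")
print(f"Thought order: {thought_order_seconds:.0f} s")   # 15 s
print(f"Time saved:    {time_saved:.0f} s")              # 60 s
```

At a 5x speedup, a 75-second spoken order shrinks to 15 seconds – hence the “whole minute” saved.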
Will cybercriminals ever be able to hack into your mind?
It’s a common trope of science fiction, and one that’s been explored in disturbingly realistic TV shows like Black Mirror and Hacking the Brain with Morgan Freeman. And in the real world of technology, we’re edging ever closer to the point where technology-enabled mind-reading and brain modification could become a reality.
Where there’s data, there are people who want to use it, and where there’s connectivity, there are the means to harness it. While legitimate organizations may face heavy restrictions on how they implement and maintain neural interfaces thanks to legislation like the GDPR, cybercrime follows no such standards. And to make matters worse, technological evolution has a habit of outpacing our ability to mitigate the risks. If, or perhaps when, cybercriminals get their hands on such tech, they might be able to steal or implant thoughts, hold people to ransom, or take social engineering scams to a whole new level.
What can we do to assume control over our thoughts?
Although reading people’s thoughts to deliver ultra-targeted advertisements or turning them into mind-controlled zombies might sound absurd, the implications of neurotechnology are real enough to have sparked the attention of regulators and human rights groups. In April 2017, neuroethicist Marcello Ienca of the University of Basel and human rights lawyer Roberto Andorno of the University of Zurich published a paper exploring ways to mitigate the risks. In the UK, the Royal Society recently urged the government to launch an investigation into neural interface technology.
We can only hope that ethics and the law catch up before it’s too late. It’s never been more critical for people and businesses to think carefully about how they use new technology and to ensure that privacy and security are always implemented by design and by default.
Article reflects the opinions of the author. Published in 2019.