In today’s digital world, nothing you do and nothing you say is private. Not only do the walls have ears, but they are also connected to the internet. There is only one space in the world that is truly private to you: your mind. But even that won’t be true for long.
Elon Musk’s Neuralink may seem like it borders on science fiction. But the day may not be far off when a machine can read, and perhaps even alter, your mind. Some advocates for neurorights, or human rights specifically aimed at protecting the brain, want regulations in place before this becomes a reality.
Jack Gallant, a cognitive scientist at UC Berkeley, and other researchers published a paper describing a rudimentary way of “reading minds.” Volunteers in a study watched hours of video clips while their heads were inside an MRI machine. The researchers then trained a computational model on a dataset that linked the recorded brain activity to each corresponding frame of video. After that, the volunteers watched new videos while MRI data was still being recorded, and the researchers fed this new data into the model they had trained earlier. The model was able to generate very vague but identifiable reconstructions of some of the imagery the volunteers had watched. The paper, by the way, was published in 2011.
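The basic recipe the study describes — fit a model linking brain activity to stimulus features, then decode new brain recordings with it — can be sketched in a few lines. The sketch below is a toy illustration using synthetic data and plain ridge regression; it is not the actual method, features, or data from the 2011 paper:

```python
import numpy as np

# Toy version of the decode-and-identify idea: learn a linear map from
# "brain activity" to "video-frame features", then identify new frames by
# matching decoded features against candidates. All data is synthetic.
rng = np.random.default_rng(0)
n_train, n_voxels, n_feats = 200, 50, 10

# Synthetic frame features and a hidden linear relation to brain activity
frame_feats = rng.normal(size=(n_train, n_feats))
true_map = rng.normal(size=(n_feats, n_voxels))
brain = frame_feats @ true_map + 0.1 * rng.normal(size=(n_train, n_voxels))

# Ridge regression: decode frame features from brain activity
lam = 1.0
W = np.linalg.solve(brain.T @ brain + lam * np.eye(n_voxels),
                    brain.T @ frame_feats)

# "New" clips: which of 20 candidate frames best matches each decoded vector?
test_feats = rng.normal(size=(20, n_feats))
test_brain = test_feats @ true_map + 0.1 * rng.normal(size=(20, n_voxels))
decoded = test_brain @ W

# Crude identification by dot-product matching against the candidate set
hits = sum(int(np.argmax(test_feats @ d) == i)
           for i, d in enumerate(decoded))
print(f"identified {hits}/20 held-out frames")
```

With this low noise level the decoder picks out most held-out frames correctly; real fMRI decoding is far noisier, which is why the 2011 reconstructions were so vague.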
In 2021, Chile’s senate approved a bill to amend the constitution to protect “neurorights,” or brain rights, making Chile the first country in the world to enshrine such rights in its constitution. But did the South American country jump in prematurely?
Guido Girardi, a former Chilean senator who played an important role in the legislation, compared neurotechnology to something else legislators might have been a little late to respond to: social media. Chile did not want to be late again. Neurotechnology, when it proliferates more widely, might have bigger implications for society than social media. The argument here is that perhaps it’s prudent to get ahead of the technology for once.
But going early can have downsides of its own, especially when we are not quite sure what the technology will be capable of in the future.
“It’s quite tricky to regulate now and the reason for that is, it’s not entirely clear what the most widely used applications will be. On one hand, you can’t wait too long because then the technology takes off too fast. There will be problems and no one will have thought about them and it will be too late. On the other hand, going too early might create problems of its own,” Allan McCay, a prominent neurorights advocate, told indianexpress.com. McCay is the Deputy Director of The Sydney Institute of Criminology and an Academic Fellow at the University of Sydney’s Law School.
According to McCay, legal systems across the world have to strike a delicate balance. They should not “let the horse bolt” and shut the barn door after. But on the flip side, they should not regulate it so strictly that they ruin chances for the technology to do good. And neurotechnology has a lot of potential to do good.
From paralysis to possibilities and back again
Ian Burkhart suffered a spinal cord injury when he was 19, which left him a quadriplegic, unable to move his arms or legs. In 2014, he signed up for a pioneering trial where he tested a brain-computer interface designed to control muscle stimulation. He got an implant in his brain that transmitted movement signals to a sleeve of electrodes worn on his arm. This meant that he could move his fingers just by thinking about it.
But he eventually had to get the device removed in 2021, long after the trial had ended. “When I first had my spinal cord injury, everyone said, ‘You’re never going to be able to move anything from your shoulders down again,’” he told MIT Technology Review. “I was able to restore that function, and then lose it again. That was tough.”
Therapeutic applications are just one of the potentially positive uses of brain-computer interfaces and other neurotechnology. Many companies are working on technology that can help treat a variety of conditions, from paralysis to epilepsy. California-based NeuroPace, for example, has an FDA-approved epilepsy device. Its RNS device detects unusual electrical activity in the brain and responds with electrical pulses to stop a seizure before it takes hold.
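The closed-loop idea behind such a device — continuously monitor a signal and respond the moment a short window looks abnormal — can be illustrated with a toy detector. The sketch below uses a crude amplitude threshold on synthetic data; it is a simplified illustration of the general pattern, not NeuroPace’s actual detection algorithm:

```python
import numpy as np

def detect_and_stimulate(signal, window=10, threshold=3.0):
    """Return the start indices of windows whose RMS exceeds the threshold."""
    triggers = []
    for start in range(len(signal) - window + 1):
        chunk = signal[start:start + window]
        rms = np.sqrt(np.mean(chunk ** 2))
        if rms > threshold:
            triggers.append(start)  # in a real device: deliver a pulse here
    return triggers

rng = np.random.default_rng(1)
quiet = rng.normal(0, 1, size=200)        # baseline activity
burst = rng.normal(0, 1, size=50) + 8.0   # simulated abnormal burst
signal = np.concatenate([quiet, burst, quiet])

triggers = detect_and_stimulate(signal)
print(f"stimulation triggered at {len(triggers)} windows")
```

A real implant tunes its detectors to each patient and must trade off missed seizures against spurious stimulation, but the principle is the same watch-and-respond loop.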
The danger to protect against
Neurotechnology is promising and the possibilities it presents are truly astounding. Coming down too hard on it, and risking taking those wonders away from people like Burkhart, would be cruel. However, the technology also poses some serious human rights challenges.
McCay is particularly concerned about the potential misuse of the technology in the criminal justice system. For example, remember the device that can treat epilepsy? Imagine if someone used similar neurotechnology to develop a device that aimed to stop convicted criminals from committing crimes by “predicting criminal behaviour” and applying some sort of stimulation to the brain.
The possibility of someone developing such dystopian technology exists outside of Black Mirror episodes. Massachusetts-based company Brainwave Science already advertises that its “iCognative” product can “uncover hidden information in the mind of a suspect.” This is a passage from their website: “This cutting-edge technology can reveal an individual’s plans and intentions, as well as any past actions related to national security, criminal activities such as fraud, and theft in businesses, providing an investigative and intelligence-gathering edge like never before.”
Also, many parts of the world already use technologies like electronic ankle bracelets to restrict the mobility of prisoners. Once the technology is available, it is not a distant leap to imagine that criminal justice systems across the world will try to monitor the minds of convicted criminals using neurotechnology.
But neurotechnology will not stay within the bounds of therapeutic use (and possible criminal justice use) for long. It is almost an eventuality that brain-computer interfaces will become mass-market devices. Elon Musk has admitted in the past that the objective of Neuralink is to “merge humans with AI.”
In essence, not long from now, a technology that can read and maybe even alter your thoughts could find its way into therapeutic medicine, the criminal justice system, and the world at large.
Privacy and transparency
It is still hard to predict exactly what the neurotechnology of the future will look like but many have an idea about what neurorights should be based on. McCay believes that neurorights should ensure both privacy and transparency—privacy for the user and transparency about how the technology works.
“Neurotech is increasingly a subset of AI, almost. Sort of like humans merging with AI. There have already been extensive discussions of AI ethics and questions about the opaqueness of black box systems and how things like bias can creep in,” explained McCay.
A study published in October 2023 by Stanford HAI (the Stanford Institute for Human-Centered Artificial Intelligence) found that foundation models like those built by OpenAI, Google, and Meta are becoming less and less transparent. This lack of transparency is not something new in the tech industry. From opaque content moderation systems on social media platforms to deceptive ads to unclear wage practices in aggregator apps, transparency issues have been a mainstay of technology companies for a long time. And soon, some new technology companies will be able to read your brain and maybe even control it.
Neurorights in India
The concerns are real. The future could be terrifying. But for now, there may be no need for additional legislation to control or prevent some of the dangerous scenarios we touched upon. Technically, some existing legal provisions in India already protect citizens against some neurotech dangers.
“After the Supreme Court’s decisions in Puttaswamy (2017) and Selvi (2010), it is fair to say that a right to privacy in one’s thoughts is protected under the Indian Constitution. In the Selvi case, the court specifically held that a forcible intrusion into a person’s mental processes is an affront to liberty. It held that narco analysis, polygraph examination, and similar techniques could not be administered forcibly as this would violate persons’ right to privacy, and specifically in the criminal law context, their right against self-incrimination,” technology lawyer Jaideep Reddy told indianexpress.com in an email interview.
According to Reddy, who focuses on the interaction of law and disruptive technologies, the Digital Personal Data Protection Act, 2023 could also play an important role in helping manage such technology once it comes into force. “Under this law, personal data generally can be collected or processed only based on consent or other specifically permitted legitimate uses. While the State is given fairly wide leeway under this law, any neural interference by private parties would be subject to more safeguards,” added Reddy.
Perhaps existing legislation and the country’s common law system can protect Indian citizens from the dangers posed by neurotechnology. But even if that is the case, what is important is that the stakeholders (citizens, regulators, and legislators) have a conversation about the technology and whether our laws are enough to govern it.