Histories of Violence: Why We Need a Universal Declaration for Neurorights

By Brad Evans
September 5, 2022

THIS IS THE 58th in a series of dialogues with artists, writers, and critical thinkers on the question of violence. This conversation is with Rafael Yuste, a professor of biological sciences and neuroscience at Columbia University. He is one of the founding members of the NeuroRights Foundation.

¤


BRAD EVANS: I would like to begin this conversation by asking about a recent claim you made concerning how we are within a decade of producing technologies that will be able to read a person’s mind. Can you talk more about the scientific basis for this assertion, which for many would seem to belong to the realm of science fiction?

RAFAEL YUSTE: With laboratory animals, neuroscientists have already been able to decode brain activity to different degrees and even manipulate this activity. We have shown it is possible to take control of the functioning of the brain, something researchers have been doing for more than a decade now. And it is becoming an increasingly powerful possibility as the technologies to intervene grow more advanced. One example from our own laboratory uses optical neurotechnology to study vision in mice: by monitoring the activity of neurons in the visual cortex, we can actually decode what the animal is looking at, and by activating those neurons we can take it a stage further and make the animal see things it is not actually seeing.

This is just one example of what can be done with animals, with a profound impact on what they perceive and how they respond to manufactured stimuli. Decoding brain activity in this way could lead to the control of behavior, by altering perception and recoding the images in the mind. What we can already do with mice today, we will no doubt be able to do with humans tomorrow. It's unavoidable. So, what matters now is how we learn to regulate it and ensure it is not abused in any way.

My colleagues at Stanford recently showed that it is now possible to decode speech in paralyzed patients at up to some 20 words per minute with 95 percent accuracy, just by recording neural activity with implanted devices. In terms of stimulating and controlling brain activity, there is also a longer history of interventions for medical conditions such as Parkinson's disease and depression that reveals how receptive the brain is to these technologies. While these procedures were not designed to control behavior, studies have shown that in some cases they can cause alienation and alterations in patients' sense of identity.

I appreciate here that you are not raising your concerns with the active manipulation of the mind as a critical thinker but as a practicing neuroscientist. When did the alarm bells first ring for you concerning this issue?

In the past I wasn't working in ethics at all; my passion was really to try and understand how the brain works. This was driven by a commitment that I still have: to do the very best for my patients. Through this I helped launch the US BRAIN Initiative back in 2013, during the Obama administration. It was during the conversations at the early stages of this initiative that I became more and more concerned with the commercial applications of neurotechnologies, which I realized could easily veer away from medical applications and be adapted for wider social use. This prospect increasingly worried me as a scientist committed to fully exploring their medical capabilities. There seemed to me to be a fundamental disconnect between using neurotechnologies to save a person's life and rolling them out widely for profit and everyday consumption.

So, my first response was to join the ethics council of the US BRAIN Initiative, to really try and push forward a discussion on an issue that nobody was talking about. Following this, I helped organize a grassroots meeting at Columbia University in 2017, which brought together 25 experts from all over the world, drawn from neuroscience, neurosurgery, medical ethics, AI, and other fields, and representing different international brain initiatives. The ambition was to map out the potential ethical and societal challenges posed by neurotechnology and, more importantly, to consider what we could do as a collective response. Through our discussions we concluded that what we were dealing with wasn't simply an issue for medical ethics; it was a human rights problem that demanded an urgent solution. The brain is not just another organ. It is the organ that generates all our mental and cognitive abilities. After all, is there a more defining right than the right we have to our own minds and intellectual faculties?

What you are proposing would seem rather self-evident in terms of protecting the integrity of the mind, and yet it also represents a remarkable shift in how we might conceptualize violence. We still often think about human rights violations as representing an assault on the body, and while there has been a notable shift in recent times toward recognizing the lasting psychological effects of violence and trauma, our analysis often remains wedded to bodily concerns. What you seem to be suggesting is a shift in the very order of importance for rights, which I understand you have been trying to codify with the NeuroRights Foundation?

The mind is the essence of what makes us human. So, if this is not a human rights issue, then what is? The NeuroRights Foundation began precisely from this ethical standpoint. This led us to identify the five neurorights we felt needed legal protection: the rights to mental privacy, personal identity, free will, fair access to mental augmentation, and protection from bias. What brings them all together is the necessity of seeing the integrity of the mind as central to any human rights deliberations. This position led to the publication of an article in the journal Nature on the ethical priorities raised by neurotechnology and AI, along with further advocacy work with international organizations and individual countries.

Working with Jared Genser, a leading human rights lawyer, and his team, we have been putting our energies into raising awareness about neurorights and pushing for legislation that can head off potential abuses before they arrive, since by then it may be too late to act with any effectiveness. A notable success has been Chile, which in 2021 became the first country to officially enshrine neuroprotections, approving with unanimous support a constitutional amendment that protects brain activity and the information extracted from it.

I should stress that these concerns have been a direct response to the development of the technologies themselves, which in turn inspired a number of conversations in which we sought to bring human rights law together with the ethical ideas enshrined in the Hippocratic Oath. It required those of us working in the medical and biomedical professions to be alert to the potential dangers and to reach out to those working in the legal field. It became very obvious that we were not going to resolve this by relying upon medical ethics alone. It needed to be framed as a policy issue, which meant speaking with people who had vast experience of working to mitigate some of the worst human rights abuses. Jared, for example, has worked with victims of torture and helps to free people detained for political reasons by authoritarian regimes. Such expertise is not what first comes to mind when you encounter neurotechnologies, but you eventually realize the evident crossovers. That was a real breakthrough for me, learning to see how we could both gain in terms of our responses.

I should also add: this is not just about neurotechnologies. We have all these other disruptive technologies being introduced, each with the potential to dramatically alter the human condition. The Metaverse, robotics, gaming, surveillance technologies, the internet, genetic engineering, and artificial intelligence will all require us to think more rigorously about their impacts and about what it means to be human. Surely if what it means to be human changes, then so must our conception of the rights humans are entitled to? Such issues cannot be left to the kind of soft-law regulation we find in the culture of medical ethics alone. We need to develop and incorporate legal provisions that can protect the humans of the future.

I am very taken here by this idea of the broadening of the framework for human rights to account for what William Connolly once termed the “neuropolitical.” Mindful of this, I would like to press you more on the problem of violence. We often think of violence in very invasive ways; yet what also seems to concern us is the anticipated arrival of noninvasive technologies that might also demand a rethinking of what violation actually means?

Our angle is precisely to look at the abusive use of noninvasive technologies as a new kind of psychological violence. From World War II onward, our conception of human rights, as you mentioned, has been tied to concerns with violence and mobilized to physically protect people from harm. We are entering a world, however, where technologies no longer simply threaten our bodies; they are directly affecting our minds. But maybe we shouldn't look at this negatively. Just as the human rights framework has provisions that positively help improve the human condition, so we might see the protection of neurorights as part of an attempt to expand the freedom of thought. So, it's not only about violence; it also reveals something deeper about the essence of the human.

We really try to deal with this through our commitment to mental privacy, which should be guaranteed for all. If mental privacy were recognized as a human right, for example by appealing more broadly to existing legislation on psychological violence and torture, such as those horrific cases we hear about where people suffer unimaginable abuse in attempts to extract information from them, we would already be prepared to face the challenge of unwarranted mental interventions. Consent would be needed before the integrity of the mind could be brought into question.

While I am in full agreement with what you are saying, it strikes me that a major challenge we will need to confront with the commercial application of such technologies (as seen in the ambitions of companies such as Neuralink) is the seduction and appeal they will hold in promising so-called better, augmented humans?

This is a major issue. Let's track back here. We can potentially use neurotechnologies to read, write, code, and manipulate the activity of the mind. This is a formidable power. But we should remember that, at this stage, neurotechnologies for reading are far more advanced than those for manipulating. If we can put frameworks in place now to protect mental privacy, then what becomes possible in the coming decades will already be partially legislated against. The problem, however, is that the conversation and legislation regarding social deployments are currently governed by the rules of consumer electronics, which are not focused on human rights. This becomes a more pressing concern when we start to properly consider the potential for mental augmentation.

The moment this ability arrives, its adoption will be inevitable and its consequences unavoidable. We will all be required to augment, or else we face the prospect of creating a dual kind of citizenship based on new mental aptitudes. What kinds of discrimination might this produce? Will we end up producing a new hybrid species? Could this create new fractures and divisions that further exacerbate existing inequalities? And what will it mean for us personally and for the societies in which we live? Are we really prepared to wash away an intellectual system that has upheld civilizations for a few millennia? Now is the time to have these conversations. We have to know in advance what we are giving ourselves over to and think very clearly about what to do before commercial augmentation happens. And this is a conversation that urgently needs to be taken out of the realm of consumer electronics.

We are already starting to learn painful lessons about what happens when we rush to introduce technologies without properly considering their ethical implications. Social media is perhaps the most obvious example. How many of its applications would pass considered and rigorous ethical scrutiny, given what we now know about their impacts on mental health, addiction, and wider social harms affecting billions of people around the world? There are stark warnings here against following a technology-first, ethics-later approach.

As a political theorist, whenever somebody talks about the wonders of any new technology, my instinct is to immediately ask about their political and military applications. Do these also factor into your immediate concerns?

Given the forces at work, the uses you mention are invariably a pressing and deeply troubling concern. They have not been our primary focus, as we have sought to begin with a more civil public conversation, but the longer-term possibility of a neurotechnological arms race is really worrisome. Let me give you a couple of examples of the weaponization of neurotechnologies that have already been considered by military leaders and strategists. First, in the United States, there is the N3 program launched by DARPA to develop wireless, noninvasive brain-computer interfaces (BCIs) so that soldiers are better equipped on the battlefield. This is about improving performance and the maneuverability of intelligently designed weapons systems. Second, in China, there is an extensive state-sponsored brain initiative, reportedly three times larger than the one launched in the United States but, crucially, run by the military. Its primary purpose is supposedly to merge neuroscience and AI to enhance the power of the security state.

Next to where I am speaking to you from right now at Columbia University is Pupin Hall, where some of the earliest nuclear fission experiments in the United States were conducted, work that helped launch the Manhattan Project. The atomic bomb was planned in close proximity to where I am sitting. I am reminded of this every single day as I go to work, mindful of the consequences. It is also worth remembering that the physicists who built the bomb were among the first to advocate for guidance on the use of atomic energy, and their advocacy helped lead to the establishment of the International Atomic Energy Agency in Vienna under the UN. I work in this shadow, realizing that technologies are neutral and can be used for good or ill.

Though, to be honest, at this stage I am more concerned about the big corporations and what it might mean to have economies modeled around the ambition of extracting brain data. This is not just like eavesdropping on emails, or monitoring and even manipulating social media feeds, and the rest. Mental privacy is the sanctuary of our minds. What we are dealing with are fundamental questions concerning our sense of self and how we live together in this world. Right now, I think we are taking the first steps toward living in what we might call “the neurotechnological age,” which will place monumental responsibilities upon our shoulders.

To conclude, I would like to ask you to consider your work in light of the horrors caused by the Manhattan Project. As a scientist, do you ever step back and think the risks of the work you do might outweigh the benefits?

I am an optimist, bullish about neurotechnology and its future, but, unfortunately, the world is not perfect. Of course, I wish that bad intentions did not exist, but we may well confront negative scenarios for humanity. Mindful of this, we shouldn't stop trying to develop tools that push humanity forward simply because they could be put to nefarious uses. Instead, we need to be vigilant about those uses and legislate against them to the best of our abilities. As a medic, I am duty-bound to help people to the best of my professional ability. It is a duty I take very seriously: I know in my own flesh the real devastation caused by mental trauma and illness, so finding solutions is something I care about very passionately.

Almost every day I get an email from a patient, a mother, or a friend asking whether neurotechnology might help cure their disease or save the life of someone they love. It is heartbreaking to read them. These are people who urgently need our help. If anybody could make any kind of breakthrough in how we understand the brain so as to assist those in need, I would encourage them to work 24/7 without hesitation. So, when I look at the Manhattan Project, I want to see what ethical lessons can be learned and how we can work in advance to preempt negative outcomes. We need to set up commissions before neurotechnology disrupts society, for example by undermining the mental privacy, identity, or agency of citizens. We need to protect our minds from technology, something previous generations did not have to worry about. It is precisely the way this technology could diminish the essence of our humanity that compels me to insist upon a universal framework for the protection of neurorights.

¤


Brad Evans is a political philosopher, critical theorist, and writer who specializes in the problem of violence. He is the founder/director of the Histories of Violence project, which has a global user base covering 143 countries.

¤


Featured image: Rafael Yuste photo is licensed under CC-BY-SA 4.0.
