- Föreningen för hjärnans integritet i Sverige
University of Oxford: Practical Ethics: Non-lethal, yet dangerous: neuroactive agents
An article and editorial in Nature warn about the militarization of agents that alter mental states. While traditional chemical weapons are intended to hurt or kill people, these agents are intended to disable: for example, they might induce confusion, sleepiness or calm. The Chemical Weapons Convention contains a loophole allowing biochemical agents to be used for law enforcement, including domestic riot control, and there is a push from some quarters to amend it to allow novel incapacitating agents. Are disabling agents just an extension of other forms of non-lethal force, or a slippery path we should avoid?
There are two issues with incapacitating agents. The first is whether they would save lives if they were available; the second is whether there is something fundamentally immoral about affecting people's judgement.
A perfect disabling agent would be perfectly safe for anybody at any dose, a tall order. Its availability should also not tempt law enforcement into overuse or into using more violence than needed. Unfortunately, past events suggest these conditions may not hold: the best-known incident involving incapacitating agents was the October 2002 Moscow theatre siege, where a fentanyl derivative was used. While the gas incapacitated the terrorists, it also killed 124 of the hostages and caused panic. Furthermore, the special forces killed the incapacitated terrorists rather than arresting them.
Non-lethal weapons have often been critiqued on these grounds. Existing weapons are commonly used improperly, making them dangerous. And while such weapons are not intended to kill, they merely reduce the chance of killing rather than eliminating it (hence the newer industry term ‘less than lethal’). Yet the appearance that using the weapon will not kill the target lowers the threshold for use, since the consequences are not perceived as dire. The reduction in fatalities from proper use of disabling weapons may therefore be outweighed by their more widespread use and the fatalities the weapons themselves cause. In addition, impairing the reasoning of stressed people in an environment full of weapons may trigger dangerous events.
Is there a fundamental difference between a disabling agent and something like tear gas? I believe there is. Tear gas impairs perception and produces an aversive experience. This strongly motivates the victim to do certain things, more or less coercing them to leave the gassed area and seek treatment. A disabling agent instead impairs the process by which decisions are made: all behavioural options remain open, but the ability to choose the best one is impaired. In a sense it is not coercive, but by degrading decision-making it also reduces moral agency. Victims of tear gas, rubber bullets or active denial systems may of course panic, losing their decision-making capacity; however, this is not the primary intention of those weapons.
From a utilitarian perspective, the effects of disabling agents appear uncertain. At present they are probably too dangerous to be acceptable, and future safer versions may have unacceptable effects on how law enforcement and military operations are conducted (in addition, criminal and terrorist uses may become possible). Many nonconsequentialist ethical perspectives do not look kindly on intentionally reducing people's moral agency: it is a direct infringement of autonomy, and being a moral agent is in many systems seen as crucial for being a rights-holder. Hence these agents are less acceptable than (already problematic) tear gas and stun rounds.
It can be inconvenient and dangerous to deal with opponents in possession of full autonomy. But temporarily removing autonomy opens a big can of worms in military ethics (what about responsibility for war crimes committed under the influence?). Worse, disabling agents are useful tools for repression, since repression relies on reducing the ability of the citizenry to act and think. Open societies may occasionally need to control their citizens, but they generally benefit from free action and thought; closed societies would instead be tempted to use the agents widely. Developing such agents may hence play into the hands of repressive societies, which gain another harmful tool of control.
We might not always be able to prevent future misuses of neuroscience, but we can make sure we have conventions that come down hard on them.
This entry was posted by Moderator on 2011/04/02 at 21:54, and is filed under Ethics & Bioethics, Surveillance - Integrity and Autonomy.