This article is part of our Autonomous Weapons Challenges series.
The real world is anything but binary. It is fuzzy and indistinct, with lots of options and potential outcomes, full of complexity and nuance. Our societies create laws and cultural norms to provide and maintain some semblance of order, but such structures are often open to interpretation, and they shift and evolve over time.
This fuzziness can be challenging for any autonomous system navigating the uncertainty of a human world, whether it's Alexa reacting to the wrong conversations or a self-driving car stymied by white trucks and orange traffic cones. But the lack of a clear line between right and wrong is especially problematic when considering autonomous weapons systems (AWS).
International Humanitarian Law (IHL) is the body of law governing international military conflicts, and it provides rules about how weapons should be used. The fundamentals of IHL were developed before the widespread use of personal computers, satellites, the Internet, and social media, and before private data became a commodity that could be accessed remotely, often without a person's knowledge or consent. Many groups are concerned that the existing laws don't cover the myriad issues that recent and emerging technologies have created, and the International Committee of the Red Cross, the watchdog of IHL, has recommended new, legally binding rules to cover AWS.
Ethical principles have been developed to help bridge the gaps between established laws and changing cultural norms and technologies, but such principles also tend to be vague and difficult to translate into legal code. For example, even if everyone agrees on an ethical principle like minimizing bias in an autonomous system, how would that be programmed? Who would determine whether an algorithmic bias has been sufficiently "minimized" for the system to be deployed?
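To make that translation problem concrete, here is a minimal, purely illustrative Python sketch of what "minimizing bias" might reduce to in practice: measuring an error-rate gap between groups and comparing it against a threshold. The groups, data, and threshold here are all invented assumptions, not drawn from any real system, law, or standard.

```python
# Hypothetical sketch: checking whether a target-recognition model's
# false-positive rate differs too much between two groups. Every name and
# number below is an assumption made for illustration only.

def false_positive_rate(predictions, labels):
    """Fraction of negative cases the model wrongly flags as positive."""
    false_positives = sum(1 for p, y in zip(predictions, labels) if p == 1 and y == 0)
    negatives = sum(1 for y in labels if y == 0)
    return false_positives / negatives if negatives else 0.0

def bias_gap(results_by_group):
    """Largest difference in false-positive rates between any two groups."""
    rates = [false_positive_rate(preds, labels)
             for preds, labels in results_by_group.values()]
    return max(rates) - min(rates)

# Invented validation results, split by some protected attribute.
validation_results = {
    "group_a": ([1, 0, 0, 1, 0], [1, 0, 0, 0, 0]),
    "group_b": ([1, 1, 0, 1, 0], [1, 0, 0, 0, 0]),
}

ACCEPTABLE_GAP = 0.05  # Who chooses this number, and on what basis?

if bias_gap(validation_results) > ACCEPTABLE_GAP:
    print("Model fails the bias check -- do not deploy")
else:
    print("Model passes the bias check")
```

Even in this toy version, the hard questions are not in the code: someone still has to decide which groups to compare, which error rate matters, and what gap counts as "acceptable."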
All countries involved in the AWS debate at the United Nations have stated that AWS must follow international law. However, they don't agree on what these laws and ethical principles mean in practice, and there is further disagreement over whether some AWS capabilities must be preemptively banned to ensure that IHL is honored.
IHL, Emerging Technology, and AWS
Much of the disagreement at the United Nations stems from uncertainty about the technology and how it will evolve. Though existing weapons systems already have some autonomous capabilities, and though there have been reports of AWS being used in Libya and questions about AWS being used in Ukraine, the extent to which AI and autonomy will change warfare remains unknown. Even where IHL mandates already exist, it's unclear whether AWS will be able to follow them: For example, can a machine be trained to reliably recognize when a combatant is injured or surrendering? Is it possible for a machine to learn the difference between a civilian and a combatant dressed as a civilian?
Cyber threats pose new risks to national security, and the ability of companies and governments to collect personal data is already a controversial legal and ethical issue. These risks are only exacerbated when paired with AWS, which could be biased, hacked, trained on bad data, or otherwise compromised as a result of weak regulations surrounding emerging technologies.
Moreover, for AI systems to work, they typically need to be trained on huge datasets. But military conflict and battlefields can be chaotic and unpredictable, and large, reliable datasets may not exist. AWS may also be subject to greater adversarial manipulation, which essentially involves tricking the system into misreading a situation, sometimes through something as simple as placing a sticker on or near an object. Is it possible for AWS algorithms to receive sufficient training and supervision to ensure they won't violate international laws, and who makes that decision?
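To give a sense of what adversarial manipulation looks like in its simplest digital form, the following sketch applies the fast gradient sign method to a hypothetical PyTorch image classifier. The model and the perturbation budget are stand-in assumptions; the point is only that a small, carefully structured change to the input can flip a model's prediction, much as a well-placed sticker can in the physical world.

```python
# Minimal sketch of the fast gradient sign method (FGSM), assuming a
# hypothetical PyTorch classifier that takes a batched image tensor and
# returns class logits. Not any specific weapons system or dataset.
import torch
import torch.nn.functional as F

def fgsm_perturbation(model, image, true_label, epsilon=0.03):
    """Return a copy of `image` nudged in the direction that increases the
    model's loss, by at most `epsilon` per pixel."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Each pixel shifts by +/- epsilon, following the sign of the gradient.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

A perturbation like this is invisible or nearly invisible to a human observer, which is exactly why it is hard to catch with the kind of training and supervision the question above asks about.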
AWS are complex, with various people and organizations involved at different stages of development, and communication between the designers and the users of a system may be limited or nonexistent. Additionally, the algorithms and AI software used in AWS may not have originally been intended for military use, or they may have been intended for the military but not for weapons specifically. To ensure the safety and reliability of AWS, new standards for testing, evaluation, verification, and validation are needed. And if an autonomous weapons system acts inappropriately or unexpectedly and causes unintended harm, will it be clear who is at fault?
Non-Military Use of AWS
While certain international laws cover human rights issues during a war, separate laws cover human rights issues in all other circumstances. Simply prohibiting a weapons system from being used during wartime does not guarantee that the system can’t be used outside of military combat. For example, tear gas has been classified as a chemical weapon and banned in warfare since 1925, but it remains legal for law enforcement to use for riot control.
If new international laws are developed to regulate the wartime use of AI and autonomy in weapons systems, human rights violations committed outside the scope of a military action could—and likely would—still occur. Such violations could involve private security companies, police, border control agencies, and non-state armed groups.
Ultimately, in order to ensure that laws, policy, and ethics are well adapted to the new technologies of AWS—and that AWS are designed to better abide by international laws and norms—policymakers need to have a stronger understanding of the technical capabilities and limitations of the weapons, and of how the weapons might be used.