This article is part of our Autonomous Weapons Challenges series.
International discussions about autonomous weapons systems (AWS) often focus on a fundamental question: Is it legal for a machine to make the decision to take a human life? But woven into this question is another: Can an autonomous weapons system be trusted to do what it’s expected to do?
If the technical challenges of developing and using AWS can’t be addressed, then the answer to both questions is likely “no.”
AI challenges are magnified when applied to weapons
Many of the known issues with AI and machine learning become even more problematic when associated with weapons. For example, AI systems can process image data far faster than human analysts can, and most of the results will be accurate. But the algorithms used for this functionality are known to introduce or exacerbate issues of bias and discrimination, targeting certain demographics more than others. Given that, is it reasonable to use image-recognition software to help humans identify potential targets?
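To make the concern concrete, here is a minimal sketch of the kind of per-group accuracy audit a developer might run on an image classifier before trusting it anywhere near targeting decisions. The model, data, and group labels are hypothetical placeholders, not any real system’s interface.

```python
# Minimal sketch of a per-group accuracy audit for an image classifier.
# The model, dataset, and group labels are hypothetical placeholders;
# a real audit would use the actual system under test and representative data.
from collections import defaultdict

def audit_by_group(model, samples):
    """samples: iterable of (image, true_label, group) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for image, true_label, group in samples:
        prediction = model(image)  # hypothetical callable model
        total[group] += 1
        if prediction == true_label:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# A large gap in accuracy between groups is a red flag that the
# classifier may systematically mis-identify some populations.
# rates = audit_by_group(my_model, labeled_validation_set)
```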
But concerns about the technical abilities of AWS extend beyond object recognition and algorithmic bias. Autonomy in weapons systems requires a slew of technologies, including sensors, communications, and onboard computing power, each of which poses its own challenges for developers. These components are often designed and programmed by different organizations, and it can be hard to predict how they will function together within the system, or how they’ll react to the variety of real-world situations and adversaries they may encounter.
Testing for assurance and risk
It’s also not at all clear how militaries can test these systems to ensure an AWS will do what’s expected and comply with International Humanitarian Law. And yet militaries typically want weapons to be tested and proven to act consistently, legally, and without harming their own soldiers before the systems are deployed. If commanders don’t trust a weapons system, they likely won’t use it. But standardized testing is especially complicated for an AI program that can learn from its interactions in the field; in fact, such standardized testing for AWS simply doesn’t exist.
We know that software updates can alter how a system behaves and may introduce bugs that cause it to behave erratically. But an autonomous weapons system powered by AI may also update its behavior based on real-world experience, and changes in the AWS’s behavior could be much harder for users to track. New information that the system accesses in the field could even cause it to drift away from its original goals.
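One way to make that kind of drift visible, at least in sketch form, is to freeze a regression suite at certification time and re-run it after every update or field-learning cycle. The snippet below assumes a hypothetical callable model and illustrative test inputs; it is not a substitute for real assurance testing.

```python
# Minimal sketch: flag behavioral drift by re-running a frozen regression
# suite after every update or field-learning cycle and comparing decisions
# to a recorded baseline. All names here are hypothetical.

def record_baseline(model, regression_inputs):
    """Run the frozen test inputs once and store the model's decisions."""
    return [model(x) for x in regression_inputs]

def drift_report(model, regression_inputs, baseline_outputs):
    """Return the fraction of test cases whose decision changed."""
    changed = sum(
        1 for x, old in zip(regression_inputs, baseline_outputs)
        if model(x) != old
    )
    return changed / len(regression_inputs)

# If drift_report(...) exceeds an agreed threshold, the system would need
# to be pulled back for re-certification before being fielded again.
```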
Similarly, cyber attacks and adversarial attacks pose a known threat, and developers try to guard against them. But if an attack succeeds, what tests could reveal that the system has been compromised, and how would a user know when to run them?
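As a rough illustration of what such a check might look like, the sketch below combines two simple tamper tests: comparing the deployed model file against a hash recorded at certification, and re-running a handful of known-answer inputs. The paths, hashes, and expected outputs are hypothetical, and neither test would catch every kind of compromise.

```python
# Minimal sketch of two tamper checks a user could run in the field:
# (1) verify the deployed model file against a known-good hash, and
# (2) run a small set of known-answer inputs whose expected outputs were
# fixed at certification time. Paths and expected values are hypothetical.
import hashlib

def file_sha256(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def integrity_check(model, model_path, certified_hash, known_answer_tests):
    """known_answer_tests: list of (input, expected_output) pairs."""
    if file_sha256(model_path) != certified_hash:
        return False  # weights on disk differ from the certified build
    return all(model(x) == expected for x, expected in known_answer_tests)

# Neither check is sufficient on its own: a hash match only covers the file
# on disk, and known-answer tests only cover the behaviors they probe.
```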
Physical challenges of autonomous weapons
Though recent advancements in artificial intelligence have led to greater concern about the use of AWS, the technical challenges of autonomy in weapons systems extend beyond AI. Physical challenges already exist for conventional weapons and for non-weaponized autonomous systems, but these same problems are further exacerbated and complicated in AWS.
For example, many autonomous systems are getting smaller even as their computational needs grow: navigation, data acquisition and analysis, and decision making, potentially all while out of communication with commanders. Can an autonomous weapons system maintain the necessary and legal functionality throughout a mission, even if communication is lost? How is data protected if the system falls into enemy hands?
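One common safeguard for the communication question, sketched here under assumed mode names and an assumed timeout, is a watchdog that drops the system into a non-engaging hold state whenever a commander’s heartbeat signal stops arriving.

```python
# Minimal sketch of a communications watchdog: if no heartbeat from a human
# commander arrives within a timeout, the system drops into a hold state and
# stops engaging. The timeout value and mode names are hypothetical.
import time

HEARTBEAT_TIMEOUT_S = 30.0  # assumed threshold for "communication lost"

class CommsWatchdog:
    def __init__(self):
        self.last_heartbeat = time.monotonic()
        self.mode = "normal"

    def on_heartbeat(self):
        """Called whenever a valid message from the commander is received."""
        self.last_heartbeat = time.monotonic()
        self.mode = "normal"

    def tick(self):
        """Called periodically by the control loop; returns the current mode."""
        if time.monotonic() - self.last_heartbeat > HEARTBEAT_TIMEOUT_S:
            self.mode = "hold"  # no engagements without human contact
        return self.mode
```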
Issues similar to these may also arise with other autonomous systems, but the consequences of failure are magnified with AWS, and extra features will likely be necessary to ensure that, for example, a weaponized autonomous vehicle on the battlefield doesn’t violate International Humanitarian Law or mistake a friendly vehicle for an enemy target. Because these problems are so new, weapons developers and lawmakers will need to work with and learn from experts in robotics to solve the technical challenges and create useful policy.
Many technical advances will contribute to various types of weapons systems. Some will prove far more difficult to develop than expected, while others will likely arrive faster. That means AWS development won’t be a single leap from conventional weapons systems to full autonomy, but will instead proceed in incremental steps as new autonomous capabilities mature. This could create a slippery slope where it’s unclear whether a line has been crossed from acceptable use of technology to unacceptable. Perhaps the solution is to look at specific robotic and autonomous technologies as they’re developed and ask ourselves whether society would want a weapons system with this capability, or whether action should be taken to prevent such a system from being built.