Though there is much debate on whether it would be in the United States' best interests to make drones fully autonomous, it is clear that doing so would present both advantages and disadvantages. Rather than have drones that are fully autonomous or fully operated by humans, I agree with P.W. Singer that the best option is a partnership between partially autonomous drones and human operators.
This evening I had the privilege of attending a lecture titled "Warfare in the 21st Century," and interestingly enough the lecturer turned out to be P.W. Singer himself. Though he discussed autonomous drones only briefly, he stressed that in the future these machines should be viewed not as independent from humans but as partners in a "quarterback to wide receiver" type of relationship. I believe this system makes the most sense: humans can utilize many of the advantages a "thinking" machine has to offer (vision, accuracy, quick response time, etc.) while also watching over the machine to provide a human element and catch potential errors that a robot might not detect on its own. Eventually, a certain trust will develop, and the human operator will be able to predict what the drone is programmed to do before it actually does it. Conversely, the drone will be able to interpret the operator's actions and respond appropriately.
This form of supported autonomy addresses one of the key objections many have to independent robots: the question of how humans will be held responsible for the actions of autonomous drones. If an autonomous robot is about to kill an innocent civilian, the drone's operators will be responsible for the decision to shut it down. The development and implementation of this fail-safe mechanism is critical to public safety. With this system in place, the government would be harnessing the positives of autonomous robots while eliminating most of the negatives that come with them. The coming presence of autonomous drones seems inevitable, but it is our responsibility to make such a future as safe as possible.
Can this system lead to mistakes like the Aegis disaster, where the operator begins to trust the drone more than his own judgment?
Do you really believe this will be a fail-safe system? The operator may begin to trust the robot, but the robot is not a living thing. It will never trust the human; it will act of its own accord unless it is physically shut down or made to do something else. In that sense, the human operator could end up being controlled by the robot.
In order for this system to work, these robots will need to be designed so that the operator can predict the robot's next move virtually 99.9 percent of the time.
If these robots are able to predict certain situations, who is to say that enemies can't program their machines to one-up us? As long as technology has the potential to keep expanding, questions like these belong in this discussion.