Moral Robot Soldiers: A Daring Possibility

Should robot soldiers be given the ability to detect emotion?

Should robot soldiers have emotions of their own?

What is the major difference between human soldiers and robots?

            Posttraumatic stress disorder (PTSD)

            Humans have to live with what they’ve done

            Robots’ memories can be erased, and erasure wouldn’t affect their judgment anyway

Can we even program robots to feel other people’s emotions?

            We often find it difficult to feel what others are feeling, so how do we program robots to sense another person’s anger, fear, or surrender, and to tell genuine emotion apart from a fake, when humans themselves are sometimes wrong in their guesses?

9 Responses to “Moral Robot Soldiers: A Daring Possibility”

  1. This could be a very interesting topic. The most important part of the argument will be to explain the potential significance of having or detecting emotion for the kinds of ethical decisions that a soldier might have to make in a war. It is OK to assume, for the purposes of argument, that such technological alternatives are possible.

    If you wish to consider the possibility or impossibility of the technologies, this should at least be considered separately from the questions above. You might also consider the possible risks when these technologies make mistakes. You should try to ground any claims in actual research in the field, or in strong philosophical arguments (e.g. it is probably fine for the robots to merely simulate emotion, so arguing that they can’t “really” have emotions is irrelevant).

  2. danima Says:

    I’m writing on this topic as well, but from more of a political theory perspective. How do you think civilians would relate to robot soldiers? Turkle’s paper talked about a turtle that didn’t need to be “alive.” Do we need robot soldiers to be self-aware for the purposes of warfare? If robot soldiers don’t really die, are they really living things?

  3. stevegal Says:

    The idea of robot soldiers is awesome. I think it is a great idea, because then there would be no real deaths, just the destruction of meaningless moving/intelligent objects that simulate humans. All our battles could be like “battle-bots.” However, the only problem would be if these robots could think on their own and rebel. But can meaningless “tools” really be rebellious? Only if we program them to… but why would we do such a thing, unless we wanted to get ourselves killed?

  4. skazafraz Says:

    But if robots had emotions, it would be nothing like battle-bots, as real things would die in the process. If robots have the ability to erase their memories, would it still be right to send them to war? What if a person blocks out their memories once they return from the front lines? It wouldn’t be right to send them back to war again. If robots had emotions, what would be the difference between them and a human with chronic amnesia?

  5. aayoubi Says:

    Of course there will be real deaths. Autonomous robots might have an even more horrific effect on human death than humans fighting each other. Humans cannot totally forget a memory; if you can forget your memories, teach me how. Why wouldn’t it be right to send them back onto the field after their memories of a previous battle they lost have been erased? I didn’t mean that all their memories would be erased, just a memory that could possibly alter their responsibilities and duties as soldiers.

  6. jdimatteo Says:

    I was intrigued by your question, “Can we even program robots to feel other people’s emotions?” You note that humans don’t even have foolproof methods of correctly sensing another person’s emotions.

    I think it is important to consider this problem as a particular instance of a larger, more general problem in building robots and AI. There is an approach in AI that works toward “thinking rationally” as opposed to “acting humanly.” In the first case, logic and syllogisms are supposed to always generate correct conclusions from knowledge. But I don’t think a syllogistic approach could be effective for detecting emotions, because the knowledge generated from perception would never be certain and the rules probably wouldn’t always hold, as the sketch below illustrates.
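    To make that concrete, here is a minimal Python sketch (the predicates, weights, and numbers are all hypothetical, chosen only for illustration) of that brittleness: a hard syllogistic rule must fire or not, while the perceptual evidence it consumes is only ever a matter of degree.

        # Hard syllogistic rule: "frowning and shouting, therefore angry".
        # Perception is reduced to certain facts, so the rule cannot hedge.
        def detect_anger_syllogistic(percepts):
            return percepts["frowning"] and percepts["shouting"]

        # The same percepts treated as confidence scores, combined and
        # thresholded, so uncertain perception yields an uncertain verdict.
        def detect_anger_probabilistic(percepts, threshold=0.7):
            score = 0.6 * percepts["frowning"] + 0.4 * percepts["shouting"]
            return score >= threshold

        # A soldier squinting into the sun while calling out a warning:
        print(detect_anger_syllogistic({"frowning": True, "shouting": True}))   # True: a false positive
        print(detect_anger_probabilistic({"frowning": 0.55, "shouting": 0.3}))  # False: 0.45 < 0.7

    The particular weights don’t matter; the point is that a syllogism’s conclusion can never be more certain than its least certain premise.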

    However, if we take the “acting humanly” approach, it seems relatively easy: just adapt a Turing Test to test the detection of human emotion. If the robot passes as human, we can say that the robot effectively detects other people’s emotions.
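    Taking that suggestion literally, here is a toy harness (every name and stand-in is hypothetical, not from any existing test): a judge sees two emotion labelings of the same clip, one from a human annotator and one from the robot, in random order, and must guess which is the machine. The robot “passes” if the judge stays near chance over many rounds.

        import random

        def emotion_turing_test(clips, human_labeler, robot_labeler, judge, rounds=100):
            correct_guesses = 0
            for _ in range(rounds):
                clip = random.choice(clips)
                labels = {"human": human_labeler(clip), "robot": robot_labeler(clip)}
                # Present the two labelings in random order, hiding their source.
                order = random.sample(["human", "robot"], 2)
                guess = judge(clip, labels[order[0]], labels[order[1]])  # returns 0 or 1
                if order[guess] == "robot":
                    correct_guesses += 1
            # Passing means the judge is near chance (0.5) at spotting the robot.
            return correct_guesses / rounds

        # Usage with toy stand-ins: identical labels leave the judge guessing.
        clips = ["clip_a", "clip_b", "clip_c"]
        human = lambda clip: "fear"
        robot = lambda clip: "fear"
        judge = lambda clip, first, second: random.randint(0, 1)
        print(emotion_turing_test(clips, human, robot, judge))  # ~0.5, so the robot passes

    The design choice mirrors the original imitation game: we never ask whether the robot “really” perceives emotion, only whether its judgments are behaviorally indistinguishable from a human’s.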

  7. aayoubi Says:

    Nice idea with the Turing test to detect human emotion. I agree fully. The main thing I argue is that if robots can sense emotion, then they will fight just wars. However, in his paper “Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/Reactive Robot Architecture,” Arkin said that in case of emergencies the autonomous robots will be allowed to deviate from the Laws of War and Rules of Engagement programmed into them. If they are allowed to do that, then we’re right back where we started, with unjust wars.

  8. Chris M. Ferguson Says:

    Just because we can erase a robot’s memories doesn’t mean we should. If we create life, then we have a moral responsibility to take care of it. No matter how many bolts and cogs are in its arse.

  9. Bonni Rambatan Says:

    I think the main challenge with battle robots would be how their deaths could live up to the necessary spectacle that human death has hitherto constituted in wars: its obscene underside, its jouissance. So emotions would have to play a large part in keeping all the tragedies and melancholy intact, even all the traumas, etc. Otherwise, trying to fight wars with cold, emotionless robots would be like trying to settle the war with a sports tournament. Real deaths and traumatic memories would effectively have to be recreated if we are to fight wars with robots instead of humans.
