Using Robots Ethically

In 1942, science fiction writer Isaac Asimov proposed his Three Laws of Robotics to guide robot-human interactions:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Today, researchers at the University of Hertfordshire in the UK have developed a concept called Empowerment, intended to help robots protect and serve humans while keeping the robots themselves safe as well.

The concept of acting ethically is difficult to distill into a computer program that can guide a robot's behavior. Public opinion swings from enthusiasm about progress in AI to outright fear that robots will one day take over the world. It is therefore important to have a clear understanding of what a robot should and should not be able to do.

Scientist Christoph Salge said, "Empowerment means being in a state where you have the greatest potential influence on the world you can perceive. So, for a simple robot, this might be getting safely back to its power station and not getting stuck, which would limit its options for movement. For a more futuristic, human-like robot, this would not just include movement, but could incorporate a variety of parameters, resulting in more human-like drives."
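In the research literature, empowerment is formalized information-theoretically as the channel capacity between a robot's actions and its future sensor states; in a simple deterministic world, this reduces to counting how many distinct states the robot can still reach. The sketch below is only a rough illustration of that idea, not the Hertfordshire team's actual implementation: it assumes a hypothetical grid world with a made-up action set and wall layout, and scores a position by the log of the number of states reachable within a few moves. A robot that is stuck or boxed in reaches fewer states, so its empowerment score drops.

```python
import math
from itertools import product

# Hypothetical action set for a grid-world robot: stay put or move one cell.
ACTIONS = {
    "stay": (0, 0),
    "up": (0, -1),
    "down": (0, 1),
    "left": (-1, 0),
    "right": (1, 0),
}

def step(state, action, walls, width, height):
    """Apply one action; blocked moves leave the robot where it is."""
    x, y = state
    dx, dy = ACTIONS[action]
    nx, ny = x + dx, y + dy
    if 0 <= nx < width and 0 <= ny < height and (nx, ny) not in walls:
        return (nx, ny)
    return (x, y)

def empowerment(state, walls, width, height, horizon=3):
    """Approximate n-step empowerment: log2 of the number of distinct
    states reachable within `horizon` moves (deterministic case)."""
    reachable = set()
    for seq in product(ACTIONS, repeat=horizon):
        s = state
        for a in seq:
            s = step(s, a, walls, width, height)
        reachable.add(s)
    return math.log2(len(reachable))

# Usage: a robot in open space has high empowerment; one trapped in a
# walled-off corner can reach only its own cell, giving 0 bits.
walls = {(1, 0), (0, 1)}  # walls that seal off the corner cell (0, 0)
print(empowerment((2, 2), walls, 5, 5))  # open space: many reachable states
print(empowerment((0, 0), walls, 5, 5))  # trapped: 0.0
```

Under this simplified view, "getting stuck" in the quote above corresponds directly to a collapse in the number of reachable states, which is the signal an empowerment-driven robot would act to avoid.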
