
Thread: Making robots that are ethical - perhaps of interest

  1. #1

    Making robots that are ethical - perhaps of interest

    The ethical robot (w/ Video)
    November 9th, 2010 in Electronics / Robotics

    (PhysOrg.com) -- Philosopher Susan Anderson is teaching machines how to behave ethically.

    Professor emerita Susan Anderson and her research partner and husband, Michael Anderson of the University of Hartford (a University of Connecticut alumnus), at first seem to have little in common in their academic lives: she's a philosopher, he's a computer scientist.

    But these two seemingly opposite fields have come together in the Andersons' collaborative work in machine ethics, a field of research that is only about 10 years old.

    “There are machines out there that are already doing things that have ethical import, such as automatic cash withdrawal machines, and many others in the development stages, such as cars that can drive themselves and eldercare robots,” says Susan, professor emerita of philosophy in the College of Liberal Arts and Sciences, who taught at UConn’s Stamford campus. “Don’t we want to make sure they behave ethically?”

    The field of machine ethics combines artificial intelligence techniques with ethical theory, a branch of philosophy, to determine how to program machines to behave in an ethical manner. But there is currently no agreement, says Susan, as to which ethical principles should be programmed into machines.

    In 1930, Scottish philosopher David Ross introduced a new approach to ethics, she says, called the prima facie duty approach, in which a person must balance many different obligations when deciding how to act in a moral way – obligations like being just, doing good, not causing harm, keeping one’s promises, and showing gratitude.

    However, the approach was never developed far enough to give a satisfactory decision principle: a rule that tells a person how to act when several of the prima facie duties pull in different directions.

    “There isn’t a decision principle within this theory, so it wasn’t widely adopted,” says Susan.

    That’s where the Andersons come in. By using information about specific ethical dilemmas supplied to them by ethicists, computers can effectively “learn” ethical principles in a process called machine learning.
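    To make the idea concrete: the Andersons' published systems reportedly used inductive logic programming to generalize a principle from cases like these. The Python below is only a rough sketch of the same idea under my own assumptions (the duty encoding, the case data, and the perceptron-style learner are all inventions for this post, not the Andersons' system). Each dilemma compares two candidate actions by the degree, from -2 to +2, to which each satisfies or violates three prima facie duties, with an ethicist's verdict as the training label:

        # Toy sketch of learning a "decision principle" from ethicist-labelled
        # dilemmas. Each case compares two candidate actions, A and B, by how
        # strongly each satisfies (+) or violates (-) three prima facie duties:
        # (benefit the patient, prevent harm, respect autonomy), on a -2..+2 scale.
        # Illustrative only: not the Andersons' actual method or data.

        CASES = [
            # (duties of action A,  duties of action B,  ethicist's verdict)
            ((1, 2, -1), (0, 0, 2), "A"),  # serious harm at stake: intervene
            ((1, 1, -1), (0, 0, 2), "B"),  # mild stakes: respect the refusal
            ((2, 2, -2), (0, 0, 2), "A"),
            ((0, 1, -1), (0, 0, 1), "B"),
        ]

        def preferred(weights, a, b):
            """Prefer the action whose weighted duty total is higher."""
            score = lambda duties: sum(w * d for w, d in zip(weights, duties))
            return "A" if score(a) > score(b) else "B"

        def learn(cases, epochs=50):
            """Perceptron-style updates until the weights reproduce every verdict."""
            weights = [0.0, 0.0, 0.0]
            for _ in range(epochs):
                for a, b, verdict in cases:
                    if preferred(weights, a, b) != verdict:
                        sign = 1 if verdict == "A" else -1
                        for i in range(3):
                            weights[i] += sign * (a[i] - b[i])
            return weights

        if __name__ == "__main__":
            w = learn(CASES)
            print("learned duty weights:", w)
            # A new dilemma: large benefit and harm prevention outweigh autonomy.
            print(preferred(w, (2, 1, -1), (0, 0, 2)))  # -> "A"

    On these four cases the learner settles on weights that rank benefit and harm prevention above autonomy when the stakes are high, which is the flavor of principle the article describes.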

    The toddler-sized robot they have been using in their research, called Nao, has been programmed with an ethical principle that was discovered by a computer. This learned principle allows their robot to determine how often to remind people to take their medicine and when to notify an overseer, such as a doctor, when they don’t comply.

    Reminding someone to take their medicine may seem relatively trivial, but the field of biomedical ethics has grown in relevance and importance since the 1960s. And robots are currently being designed to assist the elderly, so the Andersons’ research has very practical implications.

    Susan points out that there are several prima facie duties the robot must weigh in their scenario: enabling the patient to receive potential benefits from taking the medicine, preventing harm to the patient that might result from not taking the medication, and respecting the person’s right of autonomy. These prima facie duties must be correctly balanced to help the robot decide when to remind the patient to take medication and whether to leave the person alone or to inform a caregiver, such as a doctor, if the person has refused to take the medicine.
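    For concreteness, here is one hypothetical way such a balance might be operationalized. The actions, duty scores, and urgency scale below are invented for illustration and are not the Andersons' learned principle; the idea is simply to score each candidate action against the three duties and let the weight on harm prevention grow with the danger of a missed dose:

        # Hypothetical balancing of the three duties in the eldercare scenario.
        # Duty scores per action: (benefit to patient, harm prevented, autonomy).
        # All numbers are invented for illustration.

        ACTIONS = {
            "wait":   (0, 0, 2),    # leave the patient alone for now
            "remind": (2, 1, -1),   # nudge the patient to take the dose
            "notify": (1, 2, -2),   # escalate to a caregiver such as a doctor
        }

        def choose_action(urgency):
            """Pick the action with the best duty balance.

            `urgency` (0.0 .. 3.0) scales the harm-prevention duty: low for a
            routine supplement, high when skipping the dose is dangerous.
            """
            weights = (1.0, urgency, 1.0)  # benefit, harm prevention, autonomy
            def score(duties):
                return sum(w * d for w, d in zip(weights, duties))
            return max(ACTIONS, key=lambda action: score(ACTIONS[action]))

        for urgency in (0.5, 1.5, 2.5):
            print(urgency, "->", choose_action(urgency))
        # 0.5 -> wait, 1.5 -> remind, 2.5 -> notify

    As the urgency rises, the weighted balance tips from respecting the refusal, to reminding, to notifying a caregiver, mirroring the behavior the article attributes to Nao.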

    Michael says that although their research is in its early stages, it’s important to think about ethics alongside developing artificial intelligence. Above all, he and Susan want to refute the science fiction portrayal of robots harming human beings.

    “We should think about the things that robots could do for us if they had ethics inside them,” Michael says. “We’d allow them to do more things for us, and we’d trust them more.”

    The Andersons organized the first international conference on machine ethics in 2005, and they have a book on machine ethics being published by Cambridge University Press. In the future, they envision computers continuing to engage in machine learning of ethics through dialogues with ethicists concerning real ethical dilemmas that machines might face in particular environments.

    “Machines would effectively learn the ethically relevant features, prima facie duties, and ultimately the decision principles that should govern their behavior in those domains,” says Susan.

    Although this is a vision of the future of machine ethics research, Susan thinks that artificial intelligence has already changed her chosen field in major ways.

    She thinks that working in machine ethics, which forces philosophers who are used to thinking abstractly to be more precise in applying ethics to specific, real-life cases, might actually advance the study of ethics.

    And she believes that robots could be good for humanity: interacting with robots that have been programmed to behave ethically could even inspire humans to behave more ethically.

    Provided by University of Connecticut

    "The ethical robot (w/ Video)." November 9th, 2010. http://www.physorg.com/news/2010-11-...bot-video.html

  2. #2
    Super Moderator Petr Schreiber
    Join Date: Aug 2005 · Location: Brno, Czech Republic · Posts: 7,129 · Rep Power: 732
    Thanks Lance,

    it seems the Nao robot platform is extremely popular; I've already seen them playing soccer, and now this.

    I think that by the end of the century, robots could well become common companions to humans in many areas. I hope I will live long enough to experience it, and to help push this idea further.

    The ethics of robots is very important, and so are the ethics of their creators. Even now, a lot of research goes into military applications of robots. In my opinion that is a major mistake. Without wanting to sound too dramatic, I think people have already made the mistake of starting wars, and we should not teach robots to do the same.


    Petr
    Last edited by Petr Schreiber; 10-11-2010 at 21:34.
    Learn 3D graphics with ThinBASIC, learn TBGL!
    Windows 10 64bit - Intel Core i5-3350P @ 3.1GHz - 16 GB RAM - NVIDIA GeForce GTX 1050 Ti 4GB

