A Glossary of Roboethics: A Crude Beginning


    Thank you to Rumman Chowdhury for suggesting someone put one together.

    A glossary of terms for aspiring AI ethicists is an excellent idea. This document will start out, more or less, as a glossary. However, because many of the terms are quite expansive or still poorly defined (e.g. superintelligence, embodiment, empathy), they cannot be readily distilled into a few sentences. Because the field is composed of several overlapping disciplines, a groundwork would have to be carefully laid by several experts with some understanding of each other’s work. An encyclopedia is premature, but an ordinary list of words and definitions will not suffice. EthicsNet, with its plethora of scenarios and quantified case studies for moral intelligences, gives concrete examples of how philosophical principles will materialize in the real world.

    An encyclopedia gives us an opportunity to introduce, expand upon, or alter ideas from other disciplines. As strange as it may be to talk about artificial ontogeny or robopsychology, it is quite possible these are bridges we will eventually have to cross. Cross-communication lets us deal broadly and rigorously with a topic while also speculating on possible trajectories the topic may take in the near and distant future. This is a crude beginning sorely in need of academic references and many more details. Each entry is just a start.

    Autonomy: Autonomy is a recurring issue in ethics. Virtual personal assistants will be privy to an individual’s correspondence, biometric data, whole genome, emotional responses, musical tastes, purchases, internet search history, and private thoughts. For these programs to function, and more importantly to function ethically, some fundamental code governing when and how to act in light of new information will have to be developed. The same principles apply to any machine with the capacity to influence human behavior or to fundamentally alter human identity through synthetic biology, nanomedicine, or neurotechnology. Autonomy is also an area where duty-based ethics and utilitarianism often clash: it is not clear when a caregiving bot, for instance, is merely doing its job (discouraging damaging or destructive habits) and when it is overstepping its boundaries.

    Deontology: Acting on principle rather than in a manner calculated to deliver the optimum result. It is often characterized as a system of ethics concerned more with means than ends. Isaac Asimov’s robot stories often revolve around the situations that arise when one principle comes into conflict with another. Even aside from such conundrums, a single guiding precept can cause headaches. The classic example is telling the truth. While this is a rule we are taught as children, we eventually learn it is not without nuance: we bend it to spare people’s feelings, to save ourselves unwanted trouble (does it matter if we forgot to dust the antiques of our fussbudget employer because we were busy making sure they took their medication?), and to protect those around us (is it our place to report infidelity, or a similarly immoral but not illegal act?).

    Embodiment: Embodiment could be viewed as the number and degree of anthropomorphic qualities that have been given to a synthetic being, the presence of something analogous to a nervous system, or having a physical form instead of existing in a purely virtual realm. Embodiment ties into the concepts of neuromorphic computing and the phenomenological structuring of the way machines interpret perceptual input.

    International Law: Robots will function across international boundaries and may act in nations that exist only within games, online communities, or in space - places outside the jurisdiction of any nation state. Understanding and juggling the mandates of different countries is perhaps a job best left to machines, but sticking to the letter of the law may collide with a machine’s deontological code or with its consequentialist lines of reasoning. To what extent robots should be confined by law and jurisprudence remains an open question.

    Moral Consequentialism: As the name implies, an action is judged by its end result. In other words, decisions that may create unpleasantness in the short run (such as separating an addict from their drug of choice) are viewed as moral based on their outcomes. While a machine could be programmed to lead humanity to paradise, the road to paradise may be littered with atrocities no human could conceive of committing.
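    The contrast between the two frameworks above can be caricatured in code. The following toy sketch (entirely illustrative; the scenario, action names, and utility numbers are invented for this example, not drawn from any real system) shows a purely consequentialist chooser picking whatever maximizes a utility score, while a deontologically constrained chooser first filters out actions that violate a rule:

    ```python
    # Toy contrast of consequentialist vs. rule-constrained action selection.
    # All names and numbers here are hypothetical illustrations.
    from typing import Callable, Iterable

    Action = str

    def consequentialist_choice(actions: Iterable[Action],
                                utility: Callable[[Action], float]) -> Action:
        """Pick whichever action maximizes utility, regardless of means."""
        return max(actions, key=utility)

    def deontological_choice(actions: Iterable[Action],
                             utility: Callable[[Action], float],
                             permitted: Callable[[Action], bool]) -> Action:
        """Maximize utility only over actions that violate no rule."""
        allowed = [a for a in actions if permitted(a)]
        if not allowed:
            raise ValueError("no permissible action available")
        return max(allowed, key=utility)

    # Hypothetical caregiving scenario: a comforting lie scores best on
    # immediate outcomes, but a "never deceive" rule forbids it.
    actions = ["tell_truth", "lie", "stay_silent"]
    utility = {"tell_truth": 0.2, "lie": 0.9, "stay_silent": 0.5}.get
    permitted = lambda a: a != "lie"

    print(consequentialist_choice(actions, utility))            # lie
    print(deontological_choice(actions, utility, permitted))    # stay_silent
    ```

    The point of the sketch is only that the two frameworks can disagree on the very same inputs; real systems would need far richer representations of both outcomes and rules.
    
    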

    Sapience: A sapient entity can perform a wide range of tasks competently and so would qualify as an AGI. It is important to note, however, that this does not mean the entity has any human qualities, including basic emotions. A sapient being may behave in a perfectly acceptable manner and please those around it, yet have nothing analogous to the emotions elicited by an organic nervous system.

    Sentience: Broadly speaking, sentience is equated with having emotions; more specifically, it is having the subjective experience we call consciousness. Organisms capable of feeling complex emotions are sentient. These terms matter in roboethics because of their relationship to embodiment: how empathetic should machines be made? How far will empathy go toward making them safe, and how much will it compromise their predictability? Will it eventually be desirable or necessary?
    Last edited by Adam; 10-30-2018, 09:04 PM.