Social Robots Learning Summary
Robot rights? Towards a social-relational justification of moral
consideration.
Coeckelbergh, M. (2010)
Robots are seen as moral agents that may harm humans and therefore need morality. They should either follow laws that prevent harm, or moral machines should be built whose moral learning emerges from their intelligence.
Coeckelbergh: the rights approach is not flawed per se, but we should expand the range of arguments for moral consideration in at least three ways: deontological, utilitarian and virtue-ethical.
Giving rights to an entity implies that it has worth and needs to be treated as such (a direct argument for moral consideration).
Kant argued that anyone who violates rights fails to take into account that others should also be treated as ends. Hence, in deontological ethics, rights should be respected at all costs.
Utilitarians can give moral consideration to anything that is sentient; it does not need to be human (Singer). Speciesism: the view that certain species are superior to others.
Arguments for consistency and emancipation rely on the entity in question possessing a particular ontological feature (e.g. sentience). This is problematic because:
- Problems in application: thresholds are set too high, and the chosen feature can be irrelevant.
- Deontological and utilitarian accounts face problems in justifying moral consideration. Because the justification rests on ontological features, the argument from marginal cases arises: if certain properties are agreed upon as sufficient for moral status and not all humans share them, does that imply that some humans are not worthy of moral concern?
The rights approach focuses on individual rights and the utilitarian approach on individual interests. Both make problematic assumptions about the relationship between individual entities and the wholes these entities are part of. By focusing on individualistic features, they tend to neglect the moral relevance of the relations between entities and the wholes they are part of.
Virtue ethics seems to avoid these problems because it uses indirect arguments for moral consideration. The threshold is set by the virtues, but virtue ethics faces problems of application: it is unclear what the virtues are, towards which entities we should exercise them, and what their application consists in. A virtue ethicist will have to provide an ontology, and will likely provide one that seems to exclude all non-humans from moral consideration.
The alternative, a social-relational argument for moral consideration, attempts to avoid this skepticism by replacing the requirement that we have certain knowledge of real ontological features with the relation some entities have to us. Moral consideration is seen as something extrinsic: it is attributed to entities within social relations and social contexts, and should be seen as subject to change. Both the robot and the human are relational entities whose identity depends on their relations with other entities. When we live with artificially intelligent robots, we do not remain the same individuals and humans we were before.
Views on society held by the:
- Rights approach: individuals prior to the social, which comes only into being by agreement.
- Utilitarianism: Society must safeguard and increase overall utility.
- Communitarians and virtue ethics: they oppose liberal individualism, ascribing value to the community itself and seeing individuals, as members of the community, as shaped by it.
A social ecology is about relations between various entities, human and non-human, which are
interdependent and adapt to one another. The relations are morally significant and moral
consideration cannot be conceived apart from these relations.
Soft rights: rights given to some robots on account of their participation in social life. Rights appear as meta-properties: moral properties based on properties such as consciousness or sentience. The relational approach suggests that we should not assume there is a kind of moral backpack attached to the entity in question, but that moral consideration is granted within a dynamic relation between humans and the entity.
Ethical values and social care robots for older people: an international
qualitative study
Draper, H. & Sorell, T. (2017)
The value framework consists of autonomy, safety, enablement, independence, privacy and social connectedness. This framework should be reflected in the design of social robots, and the same framework should guide their introduction into homes. The values, however, can be in tension. Focus groups were held with older people, informal carers and formal carers of the elderly. Participants tended to agree that autonomy often has priority over other values, with the exception, in certain cases, of safety.
Concerns were not about the safety of the robot itself, but rather about how safe it is to replace human judgement with robotic programming. The findings echo the concern, expressed more widely, that robots should not be used to replace human-human interaction. They also reinforce concerns that robot care may increase social exclusion. Whilst the potential tirelessness of the robot overcomes the challenges to patience in human-human interaction, it can be associated with coercion, which acts against autonomy. Persuasion, by contrast, facilitates autonomy. Acceptable enablement is constrained by the need to change the behaviour of users in some cases whilst continuing to acknowledge their capacity for autonomous decision-making. The role of the robot is crucial to the norms it has to follow: the greater the variety of interactions, the greater the confusion about which norms to apply. This potential confusion may also encourage 'slippage', in that the older person, and others involved, may be inclined to manipulate the norms to de-emphasize enablement and independence. Devices simpler than companion robots might pre-empt this problem, but at the cost of eliminating a presence in the life of older persons.
Some formal carers raised the issue that the robot could be used to spy on the elderly, whereas others preferred the terms 'checking' and 'reinforcing adherence to treatment regimes'. It is best to ensure that privacy norms are respected and that the older person retains control of the information the robot gathers.
There must be a shared understanding of the role, capabilities and potential behaviours of the robot. A critical stage in the operationalization of the value framework is the introduction of the robot into the home for the first time. The value framework suggests that this should be a process rather than an event. Agreements between providers and receivers of care have to be reached for tensions in