Robot Self-defense: Robot, Don't Hurt Me, No More
Halmstad University, School of Information Technology. ORCID iD: 0000-0001-5100-6435
Advanced Telecommunications Research Institute International, Kyoto, Japan.
Halmstad University, School of Information Technology. ORCID iD: 0000-0003-4894-4134
Halmstad University, School of Information Technology. ORCID iD: 0000-0002-4998-1685
2022 (English). In: HRI '22: Proceedings of the 2022 ACM/IEEE International Conference on Human-Robot Interaction, IEEE Press, 2022, p. 742-745. Conference paper, Published paper (Refereed)
Abstract [en]

Would it be okay for a robot to hurt a human, if by doing so it could protect someone else? Such ethical questions could be vital to consider, as the market for social robots grows larger and robots become increasingly prevalent in our surroundings. Here we introduce the topic of “robot self-defense”, which involves the use of force by a robot in response to violence, to protect a human in its care. To explore this topic, we conducted a preliminary analysis of the literature, as well as brainstorming sessions, which led us to formulate an idea about how people will perceive robot self-defense based on the perceived risk of loss. Additionally, we propose a study design to investigate how the general public will perceive the acceptability of a robot using self-defense techniques. As part of this, we describe some hypotheses based on the assumption that the perceived acceptability will be affected by both the entities involved in a violent situation and the amount of force that is applied. The proposed scenarios will be used in a future survey to evaluate participants’ perception of a social robot using self-defense techniques under varying circumstances, toward stimulating ideation and discussion on how robots will be able to help people to live better lives. © 2022 IEEE.

Place, publisher, year, edition, pages
IEEE Press, 2022. p. 742-745
Keywords [en]
robot self-defense, acceptability, robot ethics, self-defense, violence
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
URN: urn:nbn:se:hh:diva-46452
Scopus ID: 2-s2.0-85140737841
ISBN: 978-1-6654-0731-1 (electronic)
ISBN: 978-1-6654-0732-8 (print)
OAI: oai:DiVA.org:hh-46452
DiVA, id: diva2:1644059
Conference
HRI '22 – the 2022 ACM/IEEE International Conference on Human-Robot Interaction, Sapporo, Hokkaido, Japan, March 7-10, 2022
Projects
Safety of Connected Intelligent Vehicles in Smart Cities – SafeSmart
Emergency Vehicle Traffic Light Pre-emption in Cities – EPIC
Funder
Knowledge Foundation
Vinnova
ELLIIT - The Linköping‐Lund Initiative on IT and Mobile Communications
Note

Funding: JST CREST Grant Number JPMJCR18A1, Japan, and from the Swedish Knowledge Foundation, the Swedish Innovation Agency (VINNOVA), and the ELLIIT Strategic Research Network.

Available from: 2022-03-12. Created: 2022-03-12. Last updated: 2023-01-12. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Scopus (full text)

Authority records

Kochenborger Duarte, Eduardo; Vinel, Alexey; Cooney, Martin

