hh.se Publications
Publications (10 of 15)
Cooney, M., Ong, L., Pashami, S., Järpe, E. & Ashfaq, A. (2019). Avoiding improper treatment of dementia patients by care robots. Paper presented at The Dark Side of Human-Robot Interaction: Ethical Considerations and Community Guidelines for the Field of HRI, HRI Workshop, Daegu, South Korea, March 11, 2019.
Avoiding improper treatment of dementia patients by care robots
2019 (English). Conference paper, Published paper (Refereed)
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:hh:diva-39448 (URN)
Conference
The Dark Side of Human-Robot Interaction: Ethical Considerations and Community Guidelines for the Field of HRI. HRI Workshop, Daegu, South Korea, March 11, 2019
Cooney, M. & Berck, P. (2019). Designing a Robot Which Paints With a Human: Visual Metaphors to Convey Contingency and Artistry. Paper presented at ICRA-X Robotic Art Forum, May 20-22, 2019, Montreal, Canada.
Designing a Robot Which Paints With a Human: Visual Metaphors to Convey Contingency and Artistry
2019 (English). Conference paper, Published paper (Refereed)
Abstract [en]

Socially assistive robots could contribute to fulfilling an important need for interaction in contexts where human caregivers are scarce, such as art therapy, where peers, or patients and therapists, can make art together. However, current art-making robots typically generate art either by themselves or as tools under the control of a human artist; how to make art together with a human in a good way has not yet received much attention, possibly because some concepts related to art, such as emotion and creativity, are not yet well understood. The current work reports on our use of a collaborative prototyping approach to explore this concept of a robot which can paint together with people. The result is a proposed design, based on an idea of using visual metaphors to convey contingency and artistry. Our aim is that the identified considerations will help support next steps, toward supporting positive experiences for people through art-making with a robot.

National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:hh:diva-39447 (URN)
Conference
ICRA-X Robotic Art Forum, May 20-22, 2019, Montreal, Canada
Funder
Knowledge Foundation, 20140220
Cooney, M. & Leister, W. (2019). Using the Engagement Profile to Design an Engaging Robotic Teaching Assistant for Students. Robotics, 8(1), Article ID 21.
Using the Engagement Profile to Design an Engaging Robotic Teaching Assistant for Students
2019 (English). In: Robotics, E-ISSN 2218-6581, Vol. 8, no. 1, article id 21. Article in journal (Refereed), Published
Abstract [en]

We report on an exploratory study conducted at a graduate school in Sweden with a humanoid robot, Baxter. First, we describe a list of potentially useful capabilities for a robot teaching assistant derived from brainstorming and interviews with faculty members, teachers, and students. These capabilities consist of reading educational materials out loud, greeting, alerting, allowing remote operation, providing clarifications, and moving to carry out physical tasks. Second, we present feedback on how the robot's capabilities, demonstrated in part with the Wizard of Oz approach, were perceived and iteratively adapted over the course of several lectures, using the Engagement Profile tool. Third, we discuss observations regarding the capabilities and the development process. Our findings suggest that using a social robot as a teaching assistant is promising with the chosen capabilities and the Engagement Profile tool. We find that enhancing the robot's autonomous capabilities and further investigating the role of embodiment are some important topics to be considered in future work. © 2019 by the authors.
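The Wizard of Oz demonstrations mentioned in the abstract amount to a hidden operator triggering the robot's canned capabilities on cue. Below is a minimal console sketch of that technique; the capability names and the send_command hook are hypothetical placeholders for illustration, not the study's actual setup.

```python
# Minimal Wizard-of-Oz console sketch (illustrative; not the study's code):
# a hidden operator triggers canned robot capabilities while participants
# believe the robot acts autonomously.
CAPABILITIES = {
    "r": "read_material_aloud",
    "g": "greet_students",
    "a": "alert_inattentive_student",
    "c": "provide_clarification",
}

def wizard_loop(send_command):
    """Map operator keystrokes to robot commands; 'q' quits."""
    while True:
        key = input("capability key (r/g/a/c, q to quit): ").strip().lower()
        if key == "q":
            break
        if key in CAPABILITIES:
            send_command(CAPABILITIES[key])  # hand off to the robot backend
        else:
            print("unknown key")

if __name__ == "__main__":
    # stand-in backend: just print what the robot would do
    wizard_loop(lambda cmd: print(f"[robot] executing: {cmd}"))
```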

Place, publisher, year, edition, pages
Basel: MDPI, 2019
Keywords
Evaluation, Robot, Robotic teaching assistant, Teaching, User engagement
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:hh:diva-39446 (URN); 10.3390/robotics8010021 (DOI); 000464266600001 (); 2-s2.0-85063490169 (Scopus ID)
Note

The first author received funding from the Swedish Knowledge Foundation (Sidus AIR no. 20140220 and CAISR 2010/0271) and also some travel funding from the REMIND project (H2020-MSCA-RISE no. 734355). The Engagement Profile has been developed in the context of the project VISITORENGAGEMENT, funded by the Research Council of Norway in the BIA programme, grant number 228737.

Cooney, M. & Menezes, M. L. (2018). Design for an Art Therapy Robot: An Explorative Review of the Theoretical Foundations for Engaging in Emotional and Creative Painting with a Robot. Multimodal Technologies and Interaction, Special Issue: Emotions in Robots: Embodied Interaction in Social and Non-Social Environments, 2(3), Article ID 52.
Design for an Art Therapy Robot: An Explorative Review of the Theoretical Foundations for Engaging in Emotional and Creative Painting with a Robot
2018 (English). In: Multimodal Technologies and Interaction, Special Issue: Emotions in Robots: Embodied Interaction in Social and Non-Social Environments, ISSN 2414-4088, Vol. 2, no. 3, article id 52. Article in journal (Refereed), Published
Abstract [en]

Social robots are being designed to help support people’s well-being in domestic and public environments. To address increasing incidences of psychological and emotional difficulties such as loneliness, and a shortage of human healthcare workers, we believe that robots will also play a useful role in engaging with people in therapy, on an emotional and creative level, e.g., in music, drama, playing, and art therapy. Here, we focus on the latter case, on an autonomous robot capable of painting with a person. A challenge is that the theoretical foundations are highly complex; we are only just beginning ourselves to understand emotions and creativity in human science, which have been described as highly important challenges in artificial intelligence. To gain insight, we review some of the literature on robots used for therapy and art, potential strategies for interacting, and mechanisms for expressing emotions and creativity. In doing so, we also suggest the usefulness of the responsive art approach as a starting point for art therapy robots, describe a perceived gap between our understanding of emotions in human science and what is currently typically being addressed in engineering studies, and identify some potential ethical pitfalls and solutions for avoiding them. Based on our arguments, we propose a design for an art therapy robot, also discussing a simplified prototype implementation, toward informing future work in the area.

Place, publisher, year, edition, pages
Basel: MDPI, 2018
Keywords
social robots, art therapy, emotions, creativity, art robots, therapy robots
National Category
Robotics
Identifiers
urn:nbn:se:hh:diva-37884 (URN); 10.3390/mti2030052 (DOI)
Funder
Knowledge Foundation, SIDUS AIR 20140220
Cooney, M., Pashami, S., Pinheiro Sant'Anna, A., Fan, Y. & Nowaczyk, S. (2018). Pitfalls of Affective Computing: How can the automatic visual communication of emotions lead to harm, and what can be done to mitigate such risks? In: WWW '18 Companion Proceedings of the The Web Conference 2018. Paper presented at The Web Conference 2018 (WWW '18), Lyon, France, April 23-27, 2018 (pp. 1563-1566). New York, NY: ACM Publications
Pitfalls of Affective Computing: How can the automatic visual communication of emotions lead to harm, and what can be done to mitigate such risks?
2018 (English). In: WWW '18 Companion Proceedings of the The Web Conference 2018, New York, NY: ACM Publications, 2018, p. 1563-1566. Conference paper, Published paper (Refereed)
Abstract [en]

What would happen in a world where people could "see" others' hidden emotions directly through some visualizing technology? Would lies become uncommon, and would we understand each other better? Or, to the contrary, would such forced honesty make it impossible for a society to exist? The science fiction television show Black Mirror has exposed a number of darker scenarios in which such futuristic technologies, by blurring the lines of what is private and what is not, could also catalyze suffering. Thus, the current paper first turns an eye toward identifying some potential pitfalls in emotion visualization which could lead to psychological or physical harm, miscommunication, and disempowerment. Then, some countermeasures are proposed and discussed, including some level of control over what is visualized and provision of suitably rich emotional information comprising intentions, toward facilitating a future in which emotion visualization could contribute to people's well-being. The scenarios presented here are not limited to web technologies, since one typically thinks about emotion recognition primarily in the context of direct contact. However, as interfaces develop beyond today's keyboard and monitor, more information becomes available also at a distance; for example, speech-to-text software could evolve to annotate any dictated text with a speaker's emotional state.

Place, publisher, year, edition, pages
New York, NY: ACM Publications, 2018
Keywords
Affective computing, emotion visualization, Black Mirror, privacy, ethics, intention recognition
National Category
Robotics
Identifiers
urn:nbn:se:hh:diva-37664 (URN); 10.1145/3184558.3191611 (DOI)
Conference
The Web Conference 2018 (WWW '18), Lyon, France, April 23-27, 2018
Projects
CAISR 2010/0271
Funder
Knowledge Foundation, CAISR 2010/0271
Note

Funding: Swedish Knowledge Foundation (CAISR 2010/0271 and Sidus AIR no. 20140220)

Cooney, M., Yang, C., Arunesh, S., Padi Siva, A. & David, J. (2018). Teaching Robotics with Robot Operating System (ROS): A Behavior Model Perspective. Paper presented at the Workshop on “Teaching Robotics with ROS”, European Robotics Forum 2018, Tampere, Finland, March 15, 2018.
Teaching Robotics with Robot Operating System (ROS): A Behavior Model Perspective
2018 (English). Conference paper, Oral presentation only (Refereed)
Abstract [en]

Robotics skills are in high demand, but learning robotics can be difficult due to the wide range of required knowledge, increasingly complex and diverse platforms, and components requiring dedicated software. One way to mitigate such problems is to utilize a standard framework such as Robot Operating System (ROS), which facilitates development through the reuse of open-source code; a challenge is that learning curves can be steep for students who are also first-time users. In the current paper, we suggest the use of a behavior model to structure the learning of complex frameworks like ROS in an engaging way. A practical example is provided of integrating ROS into a robotics course called “Design of Embedded and Intelligent Systems” (DEIS), along with feedback suggesting that some students responded positively to learning experiences enabled by our approach. Furthermore, some course materials, videos, and code have been made available online, which we hope might provide useful insights.
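To illustrate the behavior-model idea, the following minimal rospy node implements each behavior as a function and lets a simple arbiter pick one per cycle from perceived state. The topic names (/perception/person, /robot/action) and the behaviors themselves are assumptions made for this sketch, not code from the course materials.

```python
#!/usr/bin/env python
# Sketch of structuring a ROS node around a behavior model (illustrative).
import rospy
from std_msgs.msg import String

def idle():     return "idle: scanning surroundings"
def approach(): return "approach: moving toward detected person"
def greet():    return "greet: hello!"

BEHAVIORS = {"none": idle, "far": approach, "near": greet}
person_state = "none"  # updated from a hypothetical perception topic

def perception_cb(msg):
    global person_state
    person_state = msg.data  # expected values: "none", "far", "near"

def main():
    rospy.init_node("behavior_model_demo")
    pub = rospy.Publisher("/robot/action", String, queue_size=10)
    rospy.Subscriber("/perception/person", String, perception_cb)
    rate = rospy.Rate(2)  # 2 Hz decision loop
    while not rospy.is_shutdown():
        action = BEHAVIORS.get(person_state, idle)()  # arbitrate and run
        pub.publish(action)
        rate.sleep()

if __name__ == "__main__":
    main()
```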

Keywords
Robotics Teaching, ROS, Behavior Model
National Category
Robotics
Identifiers
urn:nbn:se:hh:diva-37665 (URN)
Conference
Workshop on “Teaching Robotics with ROS”, European Robotics Forum 2018, Tampere, Finland, March 15, 2018
Projects
Sidus AIR 20140220
Funder
Knowledge Foundation, CAISR 2010/0271
Cooney, M. & Sant'Anna, A. (2017). Avoiding Playfulness Gone Wrong: Exploring Multi-objective Reaching Motion Generation in a Social Robot. International Journal of Social Robotics, 9(4), 545-562.
Avoiding Playfulness Gone Wrong: Exploring Multi-objective Reaching Motion Generation in a Social Robot
2017 (English). In: International Journal of Social Robotics, ISSN 1875-4791, E-ISSN 1875-4805, Vol. 9, no. 4, p. 545-562. Article in journal (Refereed), Published
Abstract [en]

Companion robots will be able to perform useful tasks in homes and public places, while also providing entertainment through playful interactions. “Playful” here means fun, happy, and humorous. A challenge is that generating playful motions requires a non-trivial understanding of how people attribute meaning and intentions. The literature suggests that playfulness can lead to some undesired impressions, such as that a robot is obnoxious, untrustworthy, unsafe, moving in a meaningless fashion, or boring. To generate playfulness while avoiding such typical failures, we proposed a model for the scenario of a robot arm reaching for an object: some simplified movement patterns such as sinusoids are structured toward appearing helpful, clear about goals, safe, and combining a degree of structure and anomaly. We integrated our model into a mathematical framework (CHOMP) and built a new robot, Kakapo, to perform dynamically generated motions. The results of an exploratory user experiment were positive, suggesting that our proposed system was perceived as playful over the course of several minutes. A better impression also resulted compared with an alternative playful system which did not use our proposed heuristics; furthermore, a negative effect was observed for several minutes after showing the alternative motions, suggesting that failures are important to avoid. In addition, an inverted U-shaped correlation was observed between motion length and degree of perceived playfulness, suggesting that motions should be neither too short nor too long, and that length is also a factor which can be considered when generating playful motions. A short follow-up study provided some additional support for the idea that playful motions which seek to avoid failures can be perceived positively. Our intent is that these exploratory results will provide some insight for designing various playful robot motions, toward achieving some good interactions. © 2017, The Author(s).
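To make the "simplified movement patterns such as sinusoids" concrete, here is a minimal sketch that superimposes a tapered sinusoid, orthogonal to the motion direction, on a straight-line reach so the goal is still reached exactly. This is our illustration of the general idea, not the paper's CHOMP-based implementation, and the amplitude and cycle parameters are assumed values.

```python
# Illustrative playful-reach trajectory: straight line plus tapered sinusoid.
import numpy as np

def playful_reach(start, goal, amplitude=0.05, cycles=2, steps=100):
    """Reach from start to goal (3D points, meters) with a sinusoidal
    deviation orthogonal to the motion, tapering to zero at both ends so
    the start and goal are met exactly."""
    start, goal = np.asarray(start, float), np.asarray(goal, float)
    t = np.linspace(0.0, 1.0, steps)
    line = start + t[:, None] * (goal - start)
    direction = (goal - start) / np.linalg.norm(goal - start)
    ortho = np.cross(direction, [0.0, 0.0, 1.0])  # any orthogonal vector
    if np.linalg.norm(ortho) < 1e-8:              # motion is vertical
        ortho = np.cross(direction, [0.0, 1.0, 0.0])
    ortho /= np.linalg.norm(ortho)
    taper = np.sin(np.pi * t)                     # zero at both endpoints
    wiggle = amplitude * taper * np.sin(2 * np.pi * cycles * t)
    return line + wiggle[:, None] * ortho

traj = playful_reach([0.0, 0.0, 0.2], [0.4, 0.3, 0.2])
print(traj[0], traj[-1])  # endpoints match start and goal
```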

Place, publisher, year, edition, pages
Dordrecht: Springer Netherlands, 2017
Keywords
entertainment robotics, motion generation, social robotics, playfulness, reaching
National Category
Environmental Sciences
Identifiers
urn:nbn:se:hh:diva-35044 (URN); 10.1007/s12369-017-0411-1 (DOI); 000408405800008 (); 2-s2.0-85028359414 (Scopus ID)
Cooney, M. & Bigun, J. (2017). PastVision: Exploring “Seeing” into the Near Past with Thermal Touch Sensing and Object Detection – For Robot Monitoring of Medicine Intake by Dementia Patients. Paper presented at the 30th Annual Workshop of the Swedish Artificial Intelligence Society SAIS 2017, May 15–16, 2017, Karlskrona, Sweden (pp. 30-38). Linköping: Linköping University Electronic Press, Article ID 003.
PastVision: Exploring “Seeing” into the Near Past with Thermal Touch Sensing and Object Detection – For Robot Monitoring of Medicine Intake by Dementia Patients
2017 (English). Conference paper, Published paper (Refereed)
Abstract [en]

We present PastVision, a proof-of-concept approach that explores combining thermal touch sensing and object detection to infer recent actions by a person which have not been directly observed by a system. Inferring such past actions has not yet received much attention in the literature, but would be highly useful in scenarios in which sensing can fail (e.g., due to occlusions) and the cost of not recognizing an action is high. In particular, we focus on one such application, involving a robot which should monitor whether an elderly person with dementia has taken medicine. For this application, we explore how to combine detection of touches and objects, as well as how heat traces vary based on materials and a person’s grip, and how robot motions and activity models can be leveraged. The observed results indicate promise for the proposed approach.
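As a rough illustration of the thermal touch-sensing step, the sketch below thresholds a thermal frame against the background temperature and groups warm pixels into candidate touch regions. The temperature delta and minimum region size are assumed parameters for this sketch, not values from the paper.

```python
# Illustrative detection of residual heat traces in a thermal image:
# pixels noticeably warmer than the background are grouped into regions.
import numpy as np
from scipy import ndimage

def find_touch_traces(thermal, delta=1.5, min_pixels=20):
    """thermal: 2D array of temperatures (deg C). Returns centroids of
    connected warm regions at least `delta` above the image median."""
    background = np.median(thermal)
    mask = thermal > background + delta
    labels, n = ndimage.label(mask)          # connected components
    centroids = []
    for i in range(1, n + 1):
        region = labels == i
        if region.sum() >= min_pixels:       # ignore tiny noise blobs
            centroids.append(ndimage.center_of_mass(region))
    return centroids

# synthetic frame: 22 C background with one warm "fingerprint" patch
frame = np.full((120, 160), 22.0)
frame[40:52, 70:82] = 25.0
print(find_touch_traces(frame))  # ~[(45.5, 75.5)]
```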

Place, publisher, year, edition, pages
Linköping: Linköping University Electronic Press, 2017
Series
Linköping Electronic Conference Proceedings, ISSN 1650-3686, E-ISSN 1650-3740; 137
Keywords
Thermal Sensing, Home Robots, Action Recognition, Monitoring
National Category
Robotics
Identifiers
urn:nbn:se:hh:diva-35045 (URN)
Conference
30th Annual Workshop of the Swedish Artificial Intelligence Society SAIS 2017, May 15–16, 2017, Karlskrona, Sweden
Cooney, M. & Bigun, J. (2017). PastVision+: Thermovisual Inference of Recent Medicine Intake by Detecting Heated Objects and Cooled Lips. Frontiers in Robotics and AI, 4, Article ID 61.
PastVision+: Thermovisual Inference of Recent Medicine Intake by Detecting Heated Objects and Cooled Lips
2017 (English). In: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 4, article id 61. Article in journal (Refereed), Published
Abstract [en]

This article addresses the problem of how a robot can infer what a person has done recently, with a focus on checking oral medicine intake in dementia patients. We present PastVision+, an approach showing how thermovisual cues in objects and humans can be leveraged to infer recent unobserved human-object interactions. Our expectation is that this approach can provide enhanced speed and robustness compared to existing methods, because our approach can draw inferences from single images without needing to wait to observe ongoing actions and can deal with short-lasting occlusions; when combined, we expect a potential improvement in accuracy due to the extra information from knowing what a person has recently done. To evaluate our approach, we obtained some data in which an experimenter touched medicine packages and a glass of water to simulate intake of oral medicine, for a challenging scenario in which some touches were conducted in front of a warm background. Results were promising, with a detection accuracy of touched objects of 50% at the 15 s mark and 0% at the 60 s mark, and a detection accuracy of cooled lips of about 100 and 60% at the 15 s mark for cold and tepid water, respectively. Furthermore, we conducted a follow-up check for another challenging scenario in which some participants pretended to take medicine or otherwise touched a medicine package: accuracies of inferring object touches, mouth touches, and actions were 72.2, 80.3, and 58.3% initially, and 50.0, 81.7, and 50.0% at the 15 s mark, with a rate of 89.0% for person identification. The results suggested some areas in which further improvements would be possible, toward facilitating robot inference of human actions, in the context of medicine intake monitoring.
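The "near past" framing can be made concrete with Newton's law of cooling: a trace at temperature T now implies an elapsed time t = -tau * ln((T - T_amb) / (T_0 - T_amb)). The sketch below inverts this decay to estimate time since touch; the time constant and temperatures are assumed placeholders, not measured values from the paper.

```python
# Back-of-the-envelope time-since-touch estimate under Newton's cooling law.
import math

def time_since_touch(T_now, T_ambient=22.0, T_initial=32.0, tau=30.0):
    """Estimate seconds elapsed since a surface was touched, assuming the
    trace started at skin-like T_initial and decays exponentially with
    (material-dependent, here assumed) time constant tau toward T_ambient."""
    ratio = (T_now - T_ambient) / (T_initial - T_ambient)
    if ratio <= 0.0:
        return float("inf")  # trace has fully faded into the background
    return -tau * math.log(ratio)

print(round(time_since_touch(25.0), 1))  # ~36.1 s under these assumptions
```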

Place, publisher, year, edition, pages
Lausanne: Frontiers Media S.A., 2017
Keywords
thermovisual inference, touch detection, medicine intake, action recognition, monitoring, near past inference
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:hh:diva-35592 (URN); 10.3389/frobt.2017.00061 (DOI); 000415716600001 ()
Lundström, J., Ourique de Morais, W. & Cooney, M. (2015). A Holistic Smart Home Demonstrator for Anomaly Detection and Response. In: 2015 IEEE International Conference on Pervasive Computing and Communication Workshops (PerCom Workshops). Paper presented at SmartE: Closing the Loop – The 2nd IEEE PerCom Workshop on Smart Environments, St. Louis, Missouri, USA, March 23-27, 2015 (pp. 330-335). Piscataway, NJ: IEEE Press
A Holistic Smart Home Demonstrator for Anomaly Detection and Response
2015 (English). In: 2015 IEEE International Conference on Pervasive Computing and Communication Workshops (PerCom Workshops), Piscataway, NJ: IEEE Press, 2015, p. 330-335. Conference paper, Published paper (Refereed)
Abstract [en]

Applying machine learning methods in scenarios involving smart homes is a complex task. The many possible variations of sensors, feature representations, machine learning algorithms, middleware architectures, reasoning/decision schemes, and interactive strategies make research and development tasks non-trivial to solve. In this paper, the use of a portable, flexible, and holistic smart home demonstrator is proposed to facilitate iterative development and the acquisition of feedback when testing in regard to the above-mentioned issues. Specifically, the focus in this paper is on scenarios involving anomaly detection and response. First, a model for anomaly detection is trained with simulated data representing a priori knowledge pertaining to a person living in an apartment. Then, a reasoning mechanism uses the trained model to infer and plan a reaction to deviating activities. Reactions are carried out by a mobile interactive robot to investigate whether a detected anomaly constitutes a true emergency. The implemented demonstrator was able to detect and respond properly in 18 of 20 trials featuring normal and deviating activity patterns, suggesting the feasibility of the proposed approach for such scenarios. © IEEE 2015
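As a minimal sketch of the detect-then-respond loop, the code below trains an IsolationForest on simulated daily activity features and dispatches the robot when a day's pattern is flagged as deviating. The specific model and the feature set are our assumptions for illustration; the paper does not specify this particular detector.

```python
# Illustrative anomaly-detection-and-response loop for a smart home.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# simulated a-priori knowledge: [hours asleep, kitchen visits, door events]
normal_days = rng.normal(loc=[8.0, 5.0, 4.0],
                         scale=[0.5, 1.0, 1.0], size=(200, 3))

model = IsolationForest(contamination=0.05, random_state=0).fit(normal_days)

def respond(day_features):
    """Check one day's activity pattern and plan a reaction."""
    if model.predict([day_features])[0] == -1:   # -1 means anomaly
        print("deviation detected -> dispatch robot to check on resident")
    else:
        print("activity pattern looks normal")

respond([8.1, 5.2, 3.9])   # a typical day
respond([2.0, 0.0, 12.0])  # a deviating day, e.g., a possible emergency
```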

Place, publisher, year, edition, pages
Piscataway, NJ: IEEE Press, 2015
National Category
Signal Processing; Other Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:hh:diva-27740 (URN); 10.1109/PERCOMW.2015.7134058 (DOI); 000380510900075 (); 2-s2.0-84946061065 (Scopus ID); 978-1-4799-8425-1 (ISBN)
Conference
SmartE: Closing the Loop – The 2nd IEEE PerCom Workshop on Smart Environments, St. Louis, Missouri, USA, March 23-27, 2015
Projects
SA3L, CAISR
Funder
Knowledge Foundation
Identifiers
ORCID iD: orcid.org/0000-0002-4998-1685
