hh.se Publications
1 - 3 of 3
  • 1.
    Cooney, Martin
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Robot Art, in the Eye of the Beholder?: Personalized Metaphors Facilitate Communication of Emotions and Creativity. 2021. In: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 8, article id 668986. Article in journal (Refereed)
    Abstract [en]

    Socially assistive robots are being designed to support people's well-being in contexts such as art therapy where human therapists are scarce, by making art together with people in an appropriate way. A challenge is that various complex and idiosyncratic concepts relating to art, like emotions and creativity, are not yet well understood. Guided by the principles of speculative design, the current article describes the use of a collaborative prototyping approach involving artists and engineers to explore this design space, especially in regard to general and personalized art-making strategies. This led to identifying a goal: to generate representational or abstract art that connects emotionally with people's art and shows creativity. For this, an approach involving personalized "visual metaphors" was proposed, which balances the degree to which a robot's art is influenced by interacting persons. The results of a small user study via a survey provided further insight into people's perceptions: the general design was perceived as intended and found appealing; moreover, personalization via representational symbols appeared to lead to easier and clearer communication of emotions than via abstract symbols. In closing, the article describes a simplified demo and discusses future challenges. Thus, the contribution of the current work lies in suggesting how a robot can seek to interact with people in an emotional and creative way through personalized art; thereby, the aim is to stimulate ideation in this promising area and facilitate acceptance of such robots in everyday human environments. © 2021 Cooney.

  • 2.
    Cooney, Martin
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    PastVision+: Thermovisual Inference of Recent Medicine Intake by Detecting Heated Objects and Cooled Lips. 2017. In: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 4, article id 61. Article in journal (Refereed)
    Abstract [en]

    This article addresses the problem of how a robot can infer what a person has done recently, with a focus on checking oral medicine intake in dementia patients. We present PastVision+, an approach showing how thermovisual cues in objects and humans can be leveraged to infer recent unobserved human-object interactions. Our expectation is that this approach can provide enhanced speed and robustness compared to existing methods, because our approach can draw inferences from single images without needing to wait to observe ongoing actions and can deal with short-lasting occlusions; when combined, we expect a potential improvement in accuracy due to the extra information from knowing what a person has recently done. To evaluate our approach, we obtained some data in which an experimenter touched medicine packages and a glass of water to simulate intake of oral medicine, for a challenging scenario in which some touches were conducted in front of a warm background. Results were promising, with a detection accuracy of touched objects of 50% at the 15 s mark and 0% at the 60 s mark, and a detection accuracy of cooled lips of about 100 and 60% at the 15 s mark for cold and tepid water, respectively. Furthermore, we conducted a follow-up check for another challenging scenario in which some participants pretended to take medicine or otherwise touched a medicine package: accuracies of inferring object touches, mouth touches, and actions were 72.2, 80.3, and 58.3% initially, and 50.0, 81.7, and 50.0% at the 15 s mark, with a rate of 89.0% for person identification. The results suggested some areas in which further improvements would be possible, toward facilitating robot inference of human actions, in the context of medicine intake monitoring.

  • 3.
    Fabricius, Victor
    et al.
    Halmstad University, School of Information Technology. RISE Research Institutes of Sweden, Gothenburg, Sweden.
    Habibovic, Azra
    Scania CV AB, Södertälje, Sweden.
    Rizgary, Daban
    RISE Research Institutes of Sweden, Gothenburg, Sweden.
    Andersson, Jonas
    RISE Research Institutes of Sweden, Gothenburg, Sweden.
    Wärnestål, Pontus
    Halmstad University, School of Information Technology.
    Interactions Between Heavy Trucks and Vulnerable Road Users – A Systematic Review to Inform the Interactive Capabilities of Highly Automated Trucks. 2022. In: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 9, article id 818019. Article in journal (Refereed)
    Abstract [en]

    This study investigates interactive behaviors and communication cues of heavy goods vehicles (HGVs) and vulnerable road users (VRUs) such as pedestrians and cyclists as a means of informing the interactive capabilities of highly automated HGVs. Following a general framing of road traffic interaction, we conducted a systematic literature review of empirical HGV-VRU studies found through the databases Scopus, ScienceDirect and TRID. We extracted reports of interactive road user behaviors and communication cues from 19 eligible studies and categorized these into two groups: 1) the associated communication channel/mechanism (e.g., nonverbal behavior), and 2) the type of communication cue (implicit/explicit). We found the following interactive behaviors and communication cues: 1) vehicle-centric (e.g., HGV as a larger vehicle, adapting trajectory, position relative to the VRU, timing of acceleration to pass the VRU, displaying information via human-machine interface), 2) driver-centric (e.g., professional driver, present inside/outside the cabin, eye-gaze behavior), and 3) VRU-centric (e.g., racer cyclist, adapting trajectory, position relative to the HGV, proximity to other VRUs, eye-gaze behavior). These cues are predominantly based on road user trajectories and movements (i.e., kinesics/proxemics nonverbal behavior) forming implicit communication, which indicates that this is the primary mechanism for HGV-VRU interactions. However, there are also reports of more explicit cues such as cyclists waving to say thanks, the use of turning indicators, or new types of external human-machine interfaces (eHMI). Compared to corresponding scenarios with light vehicles, HGV-VRU interaction patterns are to a high extent formed by the HGV’s size, shape and weight. For example, this can cause VRUs to feel less safe, drivers to seek to avoid unnecessary decelerations and accelerations, or lead to strategic behaviors due to larger blind-spots. 
Based on these findings, it is likely that road user trajectories and kinematic behaviors will form the basis for communication also for highly automated HGV-VRU interaction. However, it might also be beneficial to use additional eHMI to compensate for the loss of more social driver-centric cues or to signal other types of information. While controlled experiments can be used to gather such initial insights, deeper understanding of highly automated HGV-VRU interactions will also require naturalistic studies. © 2022 Fabricius, Habibovic, Rizgary, Andersson and Wärnestål.
