2023 (English) Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]
This thesis addresses the unresolved issues of responsibility and accountability in autonomous vehicle (AV) development, advocating for human-centred approaches to enhance trustworthiness. While AVs hold the potential to improve safety, mobility, and environmental impact, poorly designed algorithms pose risks that fuel public distrust. Trust research has focused on technology-related aspects but has largely overlooked trust within broader social and cultural contexts. Efforts are underway to understand algorithm design practices and acknowledge their potential unintended consequences. For example, Baumer (2017) advocates human-centred algorithm design (HCAD) to align algorithms with user perspectives and reduce risks. HCAD incorporates theoretical, participatory, and speculative approaches, emphasising user and stakeholder engagement. This aligns with broader calls for prioritising societal considerations in technology development (Stilgoe, 2013). The research in this thesis responds to these calls by integrating theories on trust and trustworthiness, autonomous vehicle development, and human-centred approaches in empirical investigations guided by the following research question: “How can human-centred approaches support the development of trustworthy intelligent vehicle technology?” The thesis approaches this question through design ethnography, grounding the explorations in people’s real-life routines, practices, and anticipations and demonstrating how design ethnographic techniques can infuse AV development with human-centred understandings of people’s trust in AVs. The studies reported in this thesis include a) interviews and participant observations with algorithm designers, b) interviews and probing with residents, and c) staging collaborative, reflective practice through design ethnographic materials and co-creation with citizens and with city, academic, and industry stakeholders, including AV algorithm designers.
Through these empirical explorations, this thesis suggests an answer to the research question by proposing a novel and timely framework for intelligent vehicle development: trustworthy algorithm design (TAD). TAD frames trustworthiness as an ongoing process rather than merely a measurable outcome of human-technology interactions. It calls for considering autonomous vehicle algorithms as constituted through a network of stakeholders, practices, and technologies and therefore defines trustworthy algorithm design as a continuous process of collaborative learning and evolution across disciplines and sectors. Furthermore, the TAD framework suggests that for autonomous vehicle algorithm design to be trustworthy, it must be responsive, interventional, intentional, and transdisciplinary.
The TAD framework integrates ideas and strategies from well-established trajectories of research in responsible and human-centred technology development: Human-Centred Algorithm Design (Baumer, 2017), algorithms as culture (Seaver, 2017), and Responsible Innovation (Stilgoe et al., 2013). The thesis contributes to this field by empirically investigating how this integrated framework expands existing understandings of interactional trust in intelligent technologies to include participatory processes of trustworthiness, and how these processes are nurtured through cross-sector co-learning and design ethnographic materials.
Place, publisher, year, edition, pages
Halmstad: Halmstad University Press, 2023. p. 94
Series
Halmstad University Dissertations ; 103
Keywords
Trustworthy Algorithm Design, Trust, Design Ethnography, Autonomous Vehicles
National Category
Human Computer Interaction; Information Studies; Information Systems, Social aspects
Identifiers
urn:nbn:se:hh:diva-51930 (URN); 978-91-89587-20-5 (ISBN); 978-91-89587-19-9 (ISBN)
Public defence
2023-12-08, S1020, Kristian IV:s väg 3, Halmstad, 13:15 (English)
Projects
2017-03058_Vinnova / Trust in Intelligent Cars (TIC)
Funder
Vinnova, 2019-04786; Vinnova, 2018-02088; Vinnova, 2017-03058