Mitigation measures for addressing gender bias in artificial intelligence within healthcare settings: a critical area of sociological inquiry
Isaksson, Anna (Halmstad University, School of Health and Welfare). ORCID iD: 0000-0002-3720-693X
2024 (English). In: AI & Society: The Journal of Human-Centred Systems and Machine Intelligence, ISSN 0951-5666, E-ISSN 1435-5655. Article in journal (Refereed). Epub ahead of print.
Abstract [en]

Artificial intelligence (AI) is often described as crucial for making healthcare safer and more efficient. However, some studies point in the opposite direction, demonstrating how biases in AI cause inequalities and discrimination. As a result, a growing body of research proposes mitigation measures to avoid gender bias. Typically, these measures address various stakeholders, such as industry, academia, and policy-makers, but to the author’s knowledge they have not undergone sociological analysis. The article fills this gap by exploring five examples of mitigation measures designed to counteract gender bias in AI within the healthcare sector. The rapid development of AI plays a crucial role in healthcare globally and must not create or reinforce inequality and discrimination. In this effort, mitigation measures to avoid gender bias in AI in healthcare are central tools and are therefore essential to explore from a social science perspective, including sociology. Sociologists have made valuable contributions to studying inequalities and disparities in AI, but research has pointed out that more engagement is needed, specifically regarding bias in AI. While acknowledging the importance of these measures, the article suggests that they lack accountable agents for implementation and overlook potential implementation barriers such as resistance, power relations, and knowledge hierarchies. Recognizing the conditions under which mitigation measures are to be implemented is essential for understanding the challenges that may arise. Consequently, more studies exploring the practical implementation of mitigation measures from a social science perspective are needed, as is a systematic review of mitigation measures. © The Author(s) 2024.

Place, publisher, year, edition, pages
London: Springer London, 2024.
Keywords [en]
Artificial intelligence, Gender bias, Mitigation measures, Healthcare sector
National Category
Sociology
Identifiers
URN: urn:nbn:se:hh:diva-54638
DOI: 10.1007/s00146-024-02067-y
ISI: 001315047500001
Scopus ID: 2-s2.0-85204407497
OAI: oai:DiVA.org:hh-54638
DiVA, id: diva2:1899783
Funder
Åke Wiberg Foundation, H22-0022
Halmstad University
Available from: 2024-09-20. Created: 2024-09-20. Last updated: 2024-10-15. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Isaksson, Anna
