A New Bandit Setting Balancing Information from State Evolution and Corrupted Context
Galozy, Alexander (Halmstad University, School of Information Technology). ORCID iD: 0000-0002-7453-9186
Nowaczyk, Sławomir (Halmstad University, School of Information Technology, Center for Applied Intelligent Systems Research (CAISR)). ORCID iD: 0000-0002-7796-5201
Ohlsson, Mattias (Halmstad University, School of Information Technology). ORCID iD: 0000-0003-1145-4297
(English) Manuscript (preprint) (Other academic)
Abstract [en]

We propose a new sequential decision-making setting that combines key aspects of two established online learning problems with bandit feedback. The optimal action to play at any given moment is contingent on an underlying changing state which is not directly observable by the agent. Each state is associated with a context distribution, possibly corrupted, allowing the agent to identify the state. Furthermore, states evolve in a Markovian fashion, providing useful information for estimating the current state via the state history. In the proposed problem setting, we tackle the challenge of deciding on which of the two sources of information the agent should base its arm selection. We present an algorithm that uses a referee to dynamically combine the policies of a contextual bandit and a multi-armed bandit. We capture the time-correlation of states by iteratively learning the action-reward transition model, allowing for efficient exploration of actions. Our setting is motivated by adaptive mobile health (mHealth) interventions. Users transition through different, time-correlated, but only partially observable internal states that determine their current needs. The side information associated with each internal state might not always be reliable, and standard approaches that rely solely on the context risk incurring high regret. Similarly, some users might exhibit weaker correlations between subsequent states, so approaches that rely solely on state transitions risk the same. We analyze our setting and algorithm in terms of regret lower and upper bounds, and evaluate our method on simulated medication-adherence intervention data and several real-world data sets, showing improved empirical performance compared to several popular algorithms.
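The referee-based combination described in the abstract can be pictured with a minimal sketch. Everything below is an illustrative assumption, not the authors' published implementation: the class names, the epsilon-greedy and ridge-regression experts, and the exponential-weights referee are stand-ins for whatever policies and combination rule the paper actually uses.

```python
import numpy as np

rng = np.random.default_rng(0)

class EpsGreedyMAB:
    """Context-free expert: epsilon-greedy over running mean rewards."""
    def __init__(self, n_arms, eps=0.1):
        self.counts = np.zeros(n_arms)
        self.values = np.zeros(n_arms)
        self.eps = eps

    def select(self, context=None):
        if rng.random() < self.eps:
            return int(rng.integers(len(self.values)))
        return int(np.argmax(self.values))

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

class LinGreedyCB:
    """Context expert: one ridge-regression reward model per arm."""
    def __init__(self, n_arms, dim, lam=1.0):
        self.A = [lam * np.eye(dim) for _ in range(n_arms)]
        self.b = [np.zeros(dim) for _ in range(n_arms)]

    def select(self, context):
        scores = [context @ np.linalg.solve(A, b)
                  for A, b in zip(self.A, self.b)]
        return int(np.argmax(scores))

    def update(self, arm, context, reward):
        self.A[arm] += np.outer(context, context)
        self.b[arm] += reward * context

class Referee:
    """Exponential-weights arbiter deciding which expert's policy to follow."""
    def __init__(self, n_experts=2, eta=0.2):
        self.w = np.ones(n_experts)
        self.eta = eta

    def pick(self):
        p = self.w / self.w.sum()
        return int(rng.choice(len(self.w), p=p))

    def update(self, expert, reward):
        # Assumes reward in [0, 1]; the chosen expert's weight grows with success.
        self.w[expert] *= np.exp(self.eta * reward)
```

In this sketch, each round the referee samples one of the two experts, the agent plays that expert's arm, and the referee reweights the chosen expert by the observed reward, so trust gradually shifts toward whichever information source (possibly corrupted context or state history) proves more reliable for that user.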

Keywords [en]
Multi-Armed-Bandit, Contextual Bandit, Sequential Decision Making, Markov property, Non-stationary
HSV category
Identifiers
URN: urn:nbn:se:hh:diva-52137
OAI: oai:DiVA.org:hh-52137
DiVA, id: diva2:1815667
Research funder
Vinnova, 2017-04617
Available from: 2023-11-29 Created: 2023-11-29 Last updated: 2023-11-29 Bibliographically approved
Part of thesis
1. Mobile Health Interventions through Reinforcement Learning
2023 (English) Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

This thesis presents work conducted in the domain of sequential decision-making in general, and bandit problems in particular, tackling challenges from both a practical and a theoretical perspective, framed in the context of mobile health. The early stages of this work were conducted within the project "improving Medication Adherence through Person-Centred Care and Adaptive Interventions" (iMedA), which aims to provide personalized adaptive interventions to hypertensive patients, supporting them in managing their medication regimen. The focus lies on inadequate medication adherence (MA), a pervasive issue where patients do not take their medication as instructed by their physician. The selection of individuals for intervention through secondary database analysis on Electronic Health Records (EHRs) was a key challenge; it is addressed through an in-depth analysis of common adherence measures, the development of prediction models for MA, and a discussion of the limitations of such approaches for analyzing MA. Providing personalized adaptive interventions is framed in several bandit settings, addressing the challenge of delivering relevant interventions in environments where contextual information is unreliable and noisy. Furthermore, the need for good initial policies is explored and improved in the latent-bandit setting, utilizing previously collected data to optimally select the best intervention at every decision point. As the concluding work, this thesis elaborates on the need for privacy and explores different privatization techniques in the form of noise-additive strategies, using a realistic recommendation scenario.

The contributions of the thesis can be summarised as follows: (1) highlighting the issues encountered in measuring MA through secondary database analysis and providing recommendations to address them; (2) investigating machine learning models developed on EHRs for MA prediction and extracting common refill patterns from EHRs; (3) formally defining a novel contextual bandit setting with the context uncertainty commonly encountered in mobile health, and developing an algorithm designed for such environments; (4) equipping the agent with information-gathering capabilities for active action selection in the latent bandit setting; and (5) exploring important privacy aspects using a realistic recommender scenario.
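As a concrete illustration of the noise-additive strategies mentioned above under contribution (5), the sketch below adds Laplace noise to a context vector before it reaches the recommender. The function name, the choice of the Laplace mechanism, and the calibration are assumptions made for illustration; the thesis may study different mechanisms and noise calibrations.

```python
import numpy as np

def privatize(context: np.ndarray, epsilon: float,
              sensitivity: float = 1.0,
              rng: np.random.Generator | None = None) -> np.ndarray:
    """Add i.i.d. Laplace noise to a context vector; smaller epsilon
    means a larger noise scale and therefore stronger privacy."""
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon
    return context + rng.laplace(loc=0.0, scale=scale, size=context.shape)

# The recommender only ever observes the privatized context.
ctx = np.array([0.2, 0.8, 0.5])
noisy_ctx = privatize(ctx, epsilon=0.5)
```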

Place, publisher, year, edition, pages
Halmstad: Halmstad University Press, 2023. p. 56
Series
Halmstad University Dissertations ; 102
HSV category
Research programme
Hälsoinnovation, IDC
Identifiers
URN: urn:nbn:se:hh:diva-52139
ISBN: 978-91-89587-17-5
ISBN: 978-91-89587-16-8
Public defence
2023-12-15, S1002, Kristian IV:s väg 3, Halmstad, 13:00 (English)
Opponent
Supervisor
Available from: 2023-11-29 Created: 2023-11-29 Last updated: 2024-01-03 Bibliographically approved

Open Access in DiVA

Full text is missing in DiVA
