hh.se Publications
A New Bandit Setting Balancing Information from State Evolution and Corrupted Context
Galozy, Alexander (Halmstad University, School of Information Technology). ORCID iD: 0000-0002-7453-9186
Nowaczyk, Sławomir (Halmstad University, School of Information Technology, Center for Applied Intelligent Systems Research (CAISR)). ORCID iD: 0000-0002-7796-5201
Ohlsson, Mattias (Halmstad University, School of Information Technology). ORCID iD: 0000-0003-1145-4297
(English) Manuscript (preprint) (Other academic)
Abstract [en]

We propose a new sequential decision-making setting that combines key aspects of two established online learning problems with bandit feedback. The optimal action to play at any given moment is contingent on an underlying changing state which is not directly observable by the agent. Each state is associated with a context distribution, possibly corrupted, allowing the agent to identify the state. Furthermore, states evolve in a Markovian fashion, providing useful information for estimating the current state via the state history. In the proposed problem setting, we tackle the challenge of deciding which of the two sources of information the agent should base its arm selection on. We present an algorithm that uses a referee to dynamically combine the policies of a contextual bandit and a multi-armed bandit. We capture the time-correlation of states by iteratively learning the action-reward transition model, allowing for efficient exploration of actions. Our setting is motivated by adaptive mobile health (mHealth) interventions. Users transition through different, time-correlated, but only partially observable internal states that determine their current needs. The side information associated with each internal state might not always be reliable, so standard approaches that rely solely on the context risk incurring high regret. Similarly, some users might exhibit weaker correlations between subsequent states, so approaches that rely solely on state transitions risk the same. We analyze our setting and algorithm in terms of regret lower and upper bounds, and evaluate our method on simulated medication adherence intervention data and several real-world data sets, showing improved empirical performance compared to several popular algorithms.
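The abstract describes a referee that dynamically combines the policies of a contextual bandit and a multi-armed bandit. The paper's actual algorithm is not reproduced in this record; the sketch below is only a hypothetical illustration of the general referee idea, using an EXP3-style exponential-weights meta-learner over two simple base policies (UCB1 and a tabular contextual bandit). All class names, parameters, and the toy environment are invented for this example.

```python
import numpy as np

class UCB1:
    """Context-free multi-armed bandit: upper-confidence-bound arm selection."""
    def __init__(self, n_arms):
        self.counts = np.zeros(n_arms)
        self.values = np.zeros(n_arms)
        self.t = 0

    def select(self, context=None):
        self.t += 1
        untried = np.flatnonzero(self.counts == 0)
        if untried.size > 0:
            return int(untried[0])
        bonus = np.sqrt(2.0 * np.log(self.t) / self.counts)
        return int(np.argmax(self.values + bonus))

    def update(self, arm, reward, context=None):
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

class TabularContextual:
    """Contextual bandit: running mean reward per (context, arm), greedy
    with optimistic initialization to force early exploration."""
    def __init__(self, n_contexts, n_arms):
        self.counts = np.zeros((n_contexts, n_arms))
        self.values = np.ones((n_contexts, n_arms))  # optimistic start

    def select(self, context):
        return int(np.argmax(self.values[context]))

    def update(self, arm, reward, context):
        self.counts[context, arm] += 1
        self.values[context, arm] += (
            (reward - self.values[context, arm]) / self.counts[context, arm])

class Referee:
    """EXP3-style meta-learner: samples which base policy acts each round and
    reweights policies by importance-weighted observed reward."""
    def __init__(self, policies, eta=0.1, gamma=0.1, rng=None):
        self.policies = policies
        self.log_w = np.zeros(len(policies))
        self.eta, self.gamma = eta, gamma
        self.rng = rng if rng is not None else np.random.default_rng(0)

    def probs(self):
        w = np.exp(self.log_w - self.log_w.max())
        return (1 - self.gamma) * w / w.sum() + self.gamma / len(self.policies)

    def play(self, context):
        p = self.probs()
        k = int(self.rng.choice(len(self.policies), p=p))
        return k, self.policies[k].select(context), p[k]

    def update(self, k, arm, reward, context, prob):
        for pol in self.policies:          # every policy learns from the feedback
            pol.update(arm, reward, context)
        self.log_w[k] += self.eta * reward / prob  # unbiased gain estimate

# Toy environment: the hidden state cycles deterministically, the observed
# context equals the state, and the best arm matches the state.
rng = np.random.default_rng(42)
n_arms, n_contexts, T = 3, 3, 2000
referee = Referee([UCB1(n_arms), TabularContextual(n_contexts, n_arms)], rng=rng)
total = 0.0
for t in range(T):
    state = t % n_contexts
    k, arm, prob = referee.play(state)
    reward = 1.0 if arm == state else 0.0
    referee.update(k, arm, reward, state, prob)
    total += reward
mean_reward = total / T
```

In this toy run the context is uncorrupted, so the contextual policy quickly dominates and the referee's weight shifts toward it; with heavily corrupted contexts or strong state autocorrelation the balance would shift the other way, which is exactly the trade-off the proposed setting studies.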

Keywords [en]
Multi-Armed-Bandit, Contextual Bandit, Sequential Decision Making, Markov property, Non-stationary
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:hh:diva-52137
OAI: oai:DiVA.org:hh-52137
DiVA, id: diva2:1815667
Funder
Vinnova, 2017-04617
Available from: 2023-11-29 Created: 2023-11-29 Last updated: 2023-11-29
Bibliographically approved
In thesis
1. Mobile Health Interventions through Reinforcement Learning
2023 (English) Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

This thesis presents work conducted in the domain of sequential decision-making in general, and bandit problems in particular, tackling challenges from both practical and theoretical perspectives, framed in the context of mobile health. The early stages of this work were conducted within the project "Improving Medication Adherence through Person-Centred Care and Adaptive Interventions" (iMedA), which aims to provide personalized adaptive interventions to hypertensive patients, supporting them in managing their medication regimen. The focus lies on inadequate medication adherence (MA), a pervasive issue where patients do not take their medication as instructed by their physician. Selecting individuals for intervention through secondary database analysis of Electronic Health Records (EHRs) was a key challenge, and it is addressed through in-depth analysis of common adherence measures, development of prediction models for MA, and discussion of the limitations of such approaches for analyzing MA. Providing personalized adaptive interventions is framed in several bandit settings, addressing the challenge of delivering relevant interventions in environments where contextual information is unreliable and noisy. Furthermore, the need for good initial policies is explored and improved upon in the latent bandit setting, utilizing previously collected data to optimally select the best intervention at every decision point. As its concluding work, this thesis elaborates on the need for privacy and explores different privatization techniques in the form of noise-additive strategies, using a realistic recommendation scenario.

The contributions of the thesis can be summarised as follows: (1) highlighting the issues encountered in measuring MA through secondary database analysis and providing recommendations to address them, (2) investigating machine learning models developed on EHRs for MA prediction and extracting common refill patterns from EHRs, (3) formally defining a novel contextual bandit setting with context uncertainty, as commonly encountered in mobile health, and developing an algorithm designed for such environments, (4) equipping the agent with information-gathering capabilities for active action selection in the latent bandit setting, and (5) exploring important privacy aspects using a realistic recommender scenario.
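Contribution (5) concerns noise-additive privatization strategies. As a generic illustration of that family of techniques (not the thesis's specific method), the standard Laplace mechanism adds zero-mean Laplace noise whose scale is the query's L1 sensitivity divided by the privacy budget epsilon:

```python
import numpy as np

def laplace_privatize(value, sensitivity, epsilon, rng):
    """Release `value` with epsilon-differential privacy by adding Laplace
    noise with scale = L1 sensitivity / epsilon."""
    return value + rng.laplace(0.0, sensitivity / epsilon)

# Example: privatizing a statistic bounded in [0, 1] (sensitivity 1.0)
# under a privacy budget of epsilon = 2.0.
rng = np.random.default_rng(0)
true_value = 0.7
releases = np.array([laplace_privatize(true_value, 1.0, 2.0, rng)
                     for _ in range(5000)])
```

The added noise is zero-mean, so individual releases are scattered around the true value; a stricter budget (smaller epsilon) widens the noise scale and degrades utility, which is the privacy-utility trade-off such strategies must balance.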

Place, publisher, year, edition, pages
Halmstad: Halmstad University Press, 2023. p. 56
Series
Halmstad University Dissertations ; 102
National Category
Computer Sciences
Research subject
Health Innovation, Information driven care
Identifiers
URN: urn:nbn:se:hh:diva-52139
ISBN: 978-91-89587-17-5
ISBN: 978-91-89587-16-8
Public defence
2023-12-15, S1002, Kristian IV:s väg 3, Halmstad, 13:00 (English)
Available from: 2023-11-29 Created: 2023-11-29 Last updated: 2024-01-03
Bibliographically approved

Open Access in DiVA

No full text in DiVA


Authority records

Galozy, Alexander; Nowaczyk, Sławomir; Ohlsson, Mattias
