A New Bandit Setting Balancing Information from State Evolution and Corrupted Context
Halmstad University, School of Information Technology. ORCID iD: 0000-0002-7453-9186
Halmstad University, School of Information Technology, Center for Applied Intelligent Systems Research (CAISR). ORCID iD: 0000-0002-7796-5201
Halmstad University, School of Information Technology. ORCID iD: 0000-0003-1145-4297
(English) Manuscript (preprint) (Other academic)
Abstract [en]

We propose a new sequential decision-making setting that combines key aspects of two established online learning problems with bandit feedback. The optimal action to play at any given moment depends on an underlying changing state that is not directly observable by the agent. Each state is associated with a context distribution, possibly corrupted, allowing the agent to identify the state. Furthermore, states evolve in a Markovian fashion, providing useful information for estimating the current state from the state history. In the proposed problem setting, we tackle the challenge of deciding which of the two sources of information the agent should base its arm selection on. We present an algorithm that uses a referee to dynamically combine the policies of a contextual bandit and a multi-armed bandit. We capture the time-correlation of states by iteratively learning the action-reward transition model, allowing for efficient exploration of actions. Our setting is motivated by adaptive mobile health (mHealth) interventions. Users transition through different, time-correlated, but only partially observable internal states that determine their current needs. The side information associated with each internal state might not always be reliable, so standard approaches that rely solely on the context risk incurring high regret. Similarly, some users might exhibit weaker correlations between subsequent states, so approaches that rely solely on state transitions risk the same. We analyze our setting and algorithm in terms of regret lower and upper bounds, and evaluate our method on simulated medication adherence intervention data and several real-world data sets, showing improved empirical performance compared to several popular algorithms.
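The referee idea in the abstract can be illustrated as an exponential-weights meta-learner that arbitrates between two base policies. The sketch below is a minimal, hypothetical illustration, not the paper's algorithm: the toy base policies, the reward model, and the oracle scoring of both proposals (a real bandit agent only observes the reward of the arm it actually played) are all simplifying assumptions made here for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
n_arms, n_rounds, eta = 3, 2000, 0.1

# Hypothetical stand-ins for the two base learners: a "contextual" policy
# that trusts the (possibly corrupted) context, and a "transition" policy
# that trusts an online empirical action-reward model.
def contextual_policy(context):
    # pick the arm the noisy context points at
    return int(np.argmax(context))

class TransitionPolicy:
    def __init__(self, n_arms):
        self.counts = np.ones(n_arms)
        self.rewards = np.ones(n_arms)
    def act(self):
        return int(np.argmax(self.rewards / self.counts))
    def update(self, arm, reward):
        self.counts[arm] += 1
        self.rewards[arm] += reward

weights = np.ones(2)               # referee's trust in each base policy
trans = TransitionPolicy(n_arms)
state = 0
for t in range(n_rounds):
    state = (state + rng.integers(0, 2)) % n_arms            # Markovian drift
    context = np.eye(n_arms)[state] + rng.normal(0, 1.0, n_arms)  # corrupted
    proposals = [contextual_policy(context), trans.act()]
    probs = weights / weights.sum()
    arm = proposals[rng.choice(2, p=probs)]                  # referee's pick
    reward = 1.0 if arm == state else 0.0                    # optimal arm = state
    trans.update(arm, reward)
    # Exponential-weights update; scoring both proposals requires knowing the
    # hidden state, an oracle simplification used only in this simulation.
    for i, a in enumerate(proposals):
        weights[i] *= np.exp(eta * (1.0 if a == state else 0.0))
```

After the loop, `probs` reflects how much the referee has come to trust each policy; a proper bandit variant would replace the oracle scoring with importance-weighted feedback, in the style of EXP3/EXP4.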

Keywords [en]
Multi-Armed-Bandit, Contextual Bandit, Sequential Decision Making, Markov property, Non-stationary
National subject category
Computer Sciences
Identifiers
URN: urn:nbn:se:hh:diva-52137
OAI: oai:DiVA.org:hh-52137
DiVA, id: diva2:1815667
Research funder
Vinnova, 2017-04617. Available from: 2023-11-29. Created: 2023-11-29. Last updated: 2023-11-29. Bibliographically approved
Part of thesis
1. Mobile Health Interventions through Reinforcement Learning
2023 (English) Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

This thesis presents work conducted in the domain of sequential decision-making in general and bandit problems in particular, tackling challenges from both a practical and a theoretical perspective, framed in the context of mobile health. The early stages of this work were conducted within the project ``Improving Medication Adherence through Person-Centred Care and Adaptive Interventions'' (iMedA), which aims to provide personalized adaptive interventions to hypertensive patients, supporting them in managing their medication regimen. The focus lies on inadequate medication adherence (MA), a pervasive issue where patients do not take their medication as instructed by their physician. The selection of individuals for intervention through secondary database analysis on Electronic Health Records (EHRs) was a key challenge and is addressed through in-depth analysis of common adherence measures, development of prediction models for MA, and discussion of the limitations of such approaches for analyzing MA. Providing personalized adaptive interventions is framed in several bandit settings, addressing the challenge of delivering relevant interventions in environments where contextual information is unreliable and noisy. Furthermore, the need for good initial policies is explored and addressed in the latent bandit setting, utilizing previously collected data to optimally select the best intervention at every decision point. As the concluding work, this thesis elaborates on the need for privacy and explores different privatization techniques in the form of noise-additive strategies using a realistic recommendation scenario.

The contributions of the thesis can be summarised as follows: (1) highlighting the issues encountered in measuring MA through secondary database analysis and providing recommendations to address these issues; (2) investigating machine learning models developed using EHRs for MA prediction and extracting common refilling patterns from EHRs; (3) a formal problem definition for a novel contextual bandit setting with context uncertainty, commonly encountered in mobile health, and the development of an algorithm designed for such environments; (4) algorithmic improvements equipping the agent with information-gathering capabilities for active action selection in the latent bandit setting; and (5) exploring important privacy aspects using a realistic recommender scenario.

Place, publisher, year, edition, pages
Halmstad: Halmstad University Press, 2023. p. 56
Series
Halmstad University Dissertations ; 102
National subject category
Computer Sciences
Research subject
Health Innovation; Information Driven Care
Identifiers
URN: urn:nbn:se:hh:diva-52139
ISBN: 978-91-89587-17-5
ISBN: 978-91-89587-16-8
Public defence
2023-12-15, S1002, Kristian IV:s väg 3, Halmstad, 13:00 (English)
Opponent
Supervisors
Available from: 2023-11-29. Created: 2023-11-29. Last updated: 2024-01-03. Bibliographically approved

Open Access in DiVA

Full text is missing in DiVA

Other links

FullText

Person

Galozy, Alexander; Nowaczyk, Sławomir; Ohlsson, Mattias
