Mobile Health Interventions through Reinforcement Learning
Halmstad University, School of Information Technology. ORCID iD: 0000-0002-7453-9186
2023 (English) Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

This thesis presents work conducted in the domain of sequential decision-making in general, and bandit problems in particular, tackling challenges from both practical and theoretical perspectives in the context of mobile health. The early stages of this work were conducted within the project "Improving Medication Adherence through Person-Centred Care and Adaptive Interventions" (iMedA), which aims to provide personalized adaptive interventions to hypertensive patients, supporting them in managing their medication regimen. The focus lies on inadequate medication adherence (MA), a pervasive issue where patients do not take their medication as instructed by their physician. The selection of individuals for intervention through secondary database analysis on Electronic Health Records (EHRs) was a key challenge; it is addressed through an in-depth analysis of common adherence measures, the development of prediction models for MA, and a discussion of the limitations of such approaches for analyzing MA. Providing personalized adaptive interventions is framed in several bandit settings, addressing the challenge of delivering relevant interventions in environments where contextual information is unreliable and noisy. Furthermore, the need for good initial policies is explored in the latent-bandit setting, utilizing previously collected data to optimally select the best intervention at every decision point. As concluding work, this thesis elaborates on the need for privacy and explores different privatization techniques in the form of noise-additive strategies, using a realistic recommendation scenario.

The contributions of the thesis can be summarised as follows: (1) highlighting the issues encountered in measuring MA through secondary database analysis and providing recommendations to address these issues; (2) investigating machine learning models developed using EHRs for MA prediction and extracting common refilling patterns from EHRs; (3) formally defining a novel contextual bandit setting with context uncertainty, as commonly encountered in mobile health, and developing an algorithm designed for such environments; (4) algorithmic improvements equipping the agent with information-gathering capabilities for active action selection in the latent bandit setting; and (5) exploring important privacy aspects using a realistic recommender scenario.

Place, publisher, year, edition, pages
Halmstad: Halmstad University Press, 2023, p. 56
Series
Halmstad University Dissertations ; 102
National Category
Computer Sciences
Research subject
Health Innovation, Information driven care
Identifiers
URN: urn:nbn:se:hh:diva-52139
ISBN: 978-91-89587-17-5 (print)
ISBN: 978-91-89587-16-8 (electronic)
OAI: oai:DiVA.org:hh-52139
DiVA, id: diva2:1815682
Public defence
2023-12-15, S1002, Kristian IV:s väg 3, Halmstad, 13:00 (English)
Opponent
Supervisors
Available from: 2023-11-29 Created: 2023-11-29 Last updated: 2024-01-03. Bibliographically approved
List of papers
1. Pitfalls of medication adherence approximation through EHR and pharmacy records: Definitions, data and computation
2020 (English) In: International Journal of Medical Informatics, ISSN 1386-5056, E-ISSN 1872-8243, Vol. 136, article id 104092. Article in journal (Refereed), Published
Abstract [en]

Background and purpose: Patients’ adherence to medication is a complex, multidimensional phenomenon. Dispensation data and electronic health records are used to approximate medication-taking through refill adherence. In-depth discussions on the adverse effects of data quality and computational differences are rare. The purpose of this article is to evaluate the impact of common pitfalls when computing medication adherence using electronic health records.

Procedures: We point out common pitfalls associated with the data and operationalization of adherence measures. We provide operational definitions of refill adherence and conduct experiments to determine the effect of the pitfalls on adherence estimations. We performed statistical significance testing on the impact of common pitfalls using a baseline scenario as reference.

Findings: Slight changes in definition can significantly skew refill adherence estimates. Pickup patterns cause significant disagreement between measures and the commonly used proportion of days covered. Common data-related issues had a small but statistically significant (p < 0.05) impact at the population level and a significant effect on individual cases.

Conclusion: Data-related issues encountered in real-world administrative databases, which affect various operational definitions of refill adherence differently, can significantly skew refill adherence values, leading to false conclusions about adherence, particularly when estimating adherence for individuals. © 2020 The Authors. Published by Elsevier B.V. 
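To make the measure under discussion concrete, here is a minimal sketch of a proportion-of-days-covered (PDC) computation from dispensation records. The function name, the rule of pushing overlapping supply forward, and the example dates are illustrative assumptions, not one of the paper's exact operational definitions.

```python
from datetime import date, timedelta

def proportion_of_days_covered(fills, window_start, window_end):
    """Fraction of days in the observation window covered by dispensed supply.

    fills: list of (pickup_date, days_supply) tuples. Overlapping supply is
    pushed forward to the first uncovered day, which is one common convention.
    """
    covered = set()
    next_free_day = None  # first day not yet covered by earlier fills
    for pickup, days_supply in sorted(fills):
        start = pickup if next_free_day is None else max(pickup, next_free_day)
        for i in range(days_supply):
            day = start + timedelta(days=i)
            if window_start <= day <= window_end:
                covered.add(day)
        next_free_day = start + timedelta(days=days_supply)
    total_days = (window_end - window_start).days + 1
    return len(covered) / total_days

# Two 30-day fills with a 10-day gap, observed over a 90-day window:
fills = [(date(2020, 1, 1), 30), (date(2020, 2, 10), 30)]
pdc = proportion_of_days_covered(fills, date(2020, 1, 1), date(2020, 3, 30))
# → 60 of 90 days covered, PDC ≈ 0.667
```

Seemingly small choices in such a function (how overlap, caps, or window edges are handled) are exactly the kind of definitional variation the abstract warns can skew estimates.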

Place, publisher, year, edition, pages
Shannon: Elsevier, 2020
Keywords
Medication refill adherence, Electronic health records, Data quality, Pitfalls
National Category
Other Medical Engineering
Identifiers
urn:nbn:se:hh:diva-41712 (URN), 10.1016/j.ijmedinf.2020.104092 (DOI), 32062562 (PubMedID), 2-s2.0-85079281579 (Scopus ID)
Funder
Vinnova, 2017-04617
Note

Other funding: Health Technology Center and CAISR at Halmstad University and Halland's Hospital

Available from: 2020-02-25 Created: 2020-02-25 Last updated: 2023-11-29. Bibliographically approved
2. Prediction and pattern analysis of medication refill adherence through electronic health records and dispensation data
2020 (English) In: Journal of Biomedical Informatics: X, E-ISSN 2590-177X, Vol. 6-7, article id 100075. Article in journal (Refereed), Published
Abstract [en]

Background and purpose

Low adherence to medication in chronic disease patients leads to increased morbidity, mortality, and healthcare costs. The widespread adoption of electronic prescription and dispensation records allows a more comprehensive overview of medication utilization. In combination with electronic health records (EHRs), such data provide new opportunities for identifying patients at risk of nonadherence and providing more targeted and effective interventions. The purpose of this article is to study the predictability of medication adherence for a cohort of hypertensive patients, focusing on healthcare utilization factors under various predictive scenarios. Furthermore, we discover common proportion-of-days-covered patterns (PDC-patterns) for patients with index prescriptions and simulate medication-taking behaviours that might explain the observed patterns.

Procedures

We predict refill adherence focusing on factors of healthcare utilization, such as visits, prescription information and demographics of patient and prescriber. We train models with machine learning algorithms, using four different data splits: stratified random, patient, temporal forward prediction with and without index patients. We extract frequent, two-year long PDC-patterns using K-means clustering and investigate five simple models of medication-taking that can generate such PDC-patterns.

Findings

Model performance varies between data splits (AUC test set: 0.77–0.89). Including historical information increases performance slightly in most cases (approx. 1–2% absolute AUC uplift). Models show low predictive performance (AUC test set: 0.56–0.66) on index prescriptions and on patients with sudden drops in PDC (Recall: 0.58–0.63). We find 21 distinct two-year PDC-patterns, ranging from good adherence to intermittent gaps and early discontinuation in the first or second year. Simulations show that the observed PDC-patterns can only be explained by specific medication consumption behaviours.

Conclusions

Prediction models developed using EHRs exhibit bias towards patients with high healthcare utilization. Even though actual medication-taking is not observable, consumption patterns may not be arbitrary, provided that medication refilling and consumption are linked. © 2020 The Authors. Published by Elsevier Inc.
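The pattern-extraction step described above can be sketched as clustering per-month PDC trajectories with K-means. The synthetic two-group cohort, the choice of k = 2, and the farthest-point initialization below are illustrative assumptions, not the paper's data or setup (the paper reports 21 patterns).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic monthly PDC trajectories over two years (24 values per patient):
# one group stays adherent, one discontinues after six months.
adherent = np.clip(rng.normal(0.95, 0.03, size=(50, 24)), 0.0, 1.0)
dropout = np.concatenate(
    [np.clip(rng.normal(0.90, 0.05, size=(50, 6)), 0.0, 1.0),
     np.clip(rng.normal(0.05, 0.03, size=(50, 18)), 0.0, 1.0)], axis=1)
X = np.vstack([adherent, dropout])

def two_means(X, iters=20):
    """Lloyd's K-means for k=2 with farthest-point initialization."""
    c0 = X[0]
    c1 = X[np.argmax(((X - c0) ** 2).sum(axis=-1))]
    centroids = np.stack([c0, c1])
    for _ in range(iters):
        dists = ((X[:, None, :] - centroids[None]) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)
        centroids = np.stack([X[labels == j].mean(axis=0) for j in range(2)])
    return labels, centroids

labels, patterns = two_means(X)
# patterns[0] is a flat high-PDC trajectory; patterns[1] drops after month 6.
```

Each centroid is a frequent PDC-pattern; candidate medication-taking models can then be simulated and compared against these centroids.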

Place, publisher, year, edition, pages
New York, NY: Elsevier, 2020
Keywords
Medication refill adherence, Electronic health records, Simulation, Prediction, Refill patterns
National Category
Signal Processing Pharmacology and Toxicology Computer Sciences
Identifiers
urn:nbn:se:hh:diva-43529 (URN), 10.1016/j.yjbinx.2020.100075 (DOI), 2-s2.0-85087509892 (Scopus ID)
Funder
Vinnova
Note

Funding: Vinnova, Health Technology Center and CAISR at Halmstad University and Hallands Hospital for financing the research work under the project iMedA [Grant No.: 2017-04617]. 

Available from: 2020-11-26 Created: 2020-11-26 Last updated: 2023-11-29. Bibliographically approved
3. A New Bandit Setting Balancing Information from State Evolution and Corrupted Context
(English) Manuscript (preprint) (Other academic)
Abstract [en]

We propose a new sequential decision-making setting, combining key aspects of two established online learning problems with bandit feedback. The optimal action to play at any given moment is contingent on an underlying changing state which is not directly observable by the agent. Each state is associated with a context distribution, possibly corrupted, allowing the agent to identify the state. Furthermore, states evolve in a Markovian fashion, providing useful information for estimating the current state via the state history. In the proposed problem setting, we tackle the challenge of deciding which of the two sources of information the agent should base its arm selection on. We present an algorithm that uses a referee to dynamically combine the policies of a contextual bandit and a multi-armed bandit. We capture the time-correlation of states by iteratively learning the action-reward transition model, allowing for efficient exploration of actions. Our setting is motivated by adaptive mobile health (mHealth) interventions. Users transition through different, time-correlated, but only partially observable internal states, determining their current needs. The side information associated with each internal state might not always be reliable, and standard approaches that rely solely on the context risk incurring high regret. Similarly, some users might exhibit weaker correlations between subsequent states, so approaches that rely solely on state transitions risk the same. We analyze our setting and algorithm in terms of regret lower and upper bounds and evaluate our method on simulated medication adherence intervention data and several real-world data sets, showing improved empirical performance compared to several popular algorithms.
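The referee idea can be sketched as an Exp3-style meta-learner that probabilistically follows one of the two base policies and re-weights its trust from observed rewards. This is an illustration of the general mechanism only, not the paper's algorithm; the success probabilities and hyperparameters are invented.

```python
import math
import random

def exp3_referee(policy_success_probs, horizon=500, gamma=0.1, lr=0.05, seed=0):
    """Exp3-style referee over base policies: each round it follows one
    policy, observes a Bernoulli reward, and re-weights its trust using an
    importance-weighted exponential update."""
    rng = random.Random(seed)
    n = len(policy_success_probs)
    weights = [1.0] * n
    for _ in range(horizon):
        total = sum(weights)
        # Mix in uniform exploration so every policy keeps being sampled.
        probs = [(1 - gamma) * w / total + gamma / n for w in weights]
        i = rng.choices(range(n), weights=probs)[0]
        reward = 1.0 if rng.random() < policy_success_probs[i] else 0.0
        weights[i] *= math.exp(lr * reward / probs[i])
        m = max(weights)
        weights = [w / m for w in weights]  # rescale for numerical stability
    return weights

# Contextual policy succeeds 90% of the time, state-based policy only 20%:
trust = exp3_referee([0.9, 0.2])
# The referee ends up placing far more weight on the contextual policy.
```

When the context is heavily corrupted the roles reverse, and the same mechanism shifts trust toward the state-based policy; this adaptivity is the point of the referee.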

Keywords
Multi-Armed-Bandit, Contextual Bandit, Sequential Decision Making, Markov property, Non-stationary
National Category
Computer Sciences
Identifiers
urn:nbn:se:hh:diva-52137 (URN)
Funder
Vinnova, 2017-04617
Available from: 2023-11-29 Created: 2023-11-29 Last updated: 2023-11-29. Bibliographically approved
4. Information-gathering in latent bandits
2023 (English) In: Knowledge-Based Systems, ISSN 0950-7051, E-ISSN 1872-7409, Vol. 260, article id 110099. Article in journal (Refereed), Published
Abstract [en]

In the latent bandit problem, the learner has access to reward distributions and, for the non-stationary variant, transition models of the environment. The reward distributions are conditioned on the arm and unknown latent states. The goal is to use the reward history to identify the latent state, allowing for the optimal choice of arms in the future. The latent bandit setting lends itself to many practical applications, such as recommender and decision support systems, where rich data allows the offline estimation of environment models while online learning remains a critical component. Previous solutions in this setting always choose the highest-reward arm according to the agent's beliefs about the state, without explicitly considering the value of information-gathering arms. Such information-gathering arms do not necessarily provide the highest reward and thus may never be chosen by an agent that always selects the highest-reward arm.

In this paper, we present a method for information-gathering in latent bandits. Given particular reward structures and transition matrices, we show that choosing the best arm given the agent's beliefs about the states incurs higher regret. Furthermore, we show that by choosing arms carefully, we obtain an improved estimate of the state distribution and thus lower cumulative regret through better arm choices in the future. Through theoretical analysis, we show that the proposed method retains the sub-linear regret rate of previous methods while having much better problem-dependent constants. We evaluate our method on both synthetic and real-world data sets, showing significant improvement in regret over state-of-the-art methods. © 2022 The Author(s).
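Why an information-gathering arm can be worth its reward cost can be illustrated with a toy two-state belief update. The reward model and numbers below are invented for illustration: two high-paying arms have identical means in both states and so reveal nothing, while one lower-paying arm is diagnostic.

```python
import numpy as np

# Known reward model: reward_means[s, a] = expected reward of arm a in latent
# state s (invented numbers). Arms 0 and 1 pay well in both states but their
# means coincide, so their rewards carry no information about the state.
# Arm 2 pays less, but its mean differs sharply between the states.
reward_means = np.array([[0.9, 0.9, 0.1],
                         [0.9, 0.9, 0.8]])
noise_sd = 0.1

def update_belief(belief, arm, observed_reward):
    """Bayes update of the posterior over latent states from one reward
    observation, assuming Gaussian reward noise and a known reward model."""
    lik = np.exp(-0.5 * ((observed_reward - reward_means[:, arm]) / noise_sd) ** 2)
    posterior = belief * lik
    return posterior / posterior.sum()

belief = np.array([0.5, 0.5])
# A greedy arm reveals nothing: the posterior stays uniform.
after_greedy = update_belief(belief, 0, 0.88)
# The information-gathering arm resolves the state almost immediately.
after_info = update_belief(belief, 2, 0.78)
```

Once the state is resolved, all subsequent arm choices can be optimal, which is how a single deliberately suboptimal pull can lower cumulative regret.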

Place, publisher, year, edition, pages
Amsterdam: Elsevier, 2023
Keywords
Latent bandits, Information gathering, Non-stationary, Information directed sampling
National Category
Computer Sciences
Identifiers
urn:nbn:se:hh:diva-49833 (URN), 10.1016/j.knosys.2022.110099 (DOI), 2-s2.0-85143522327 (Scopus ID)
Funder
Vinnova, 2017-04617
Available from: 2023-01-16 Created: 2023-01-16 Last updated: 2023-11-29. Bibliographically approved
5. Beyond Random Noise: Insights on Anonymization Strategies from a Latent Bandit Study
(English) Manuscript (preprint) (Other academic)
Abstract [en]

This paper investigates the issue of privacy in a learning scenario where users share knowledge for a recommendation task. Our study contributes to the growing body of research on privacy-preserving machine learning and underscores the need for tailored privacy techniques that address specific attack patterns rather than relying on one-size-fits-all solutions. We use the latent bandit setting to evaluate the trade-off between privacy and recommender performance by employing various aggregation strategies, such as averaging, nearest neighbor, and clustering combined with noise injection. More specifically, we simulate a linkage attack scenario leveraging publicly available auxiliary information acquired by the adversary. Our results on three open real-world datasets reveal that adding noise using the Laplace mechanism to an individual user's data record is a poor choice. It provides the highest regret for any noise level, relative to de-anonymization probability and the ADS metric. Instead, one should combine noise with appropriate aggregation strategies. For example, using averages from clusters of different sizes provides flexibility not achievable by varying the amount of noise alone. Generally, no single aggregation strategy can consistently achieve the optimum regret for a given desired level of privacy.
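The two ends of the compared strategy spectrum can be sketched as follows. The synthetic preference vectors, noise scales, and nearest-neighbor clustering are illustrative assumptions, not the paper's datasets or experimental setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic per-user preference vectors standing in for the shared records.
users = rng.uniform(0.0, 1.0, size=(100, 8))

def laplace_mechanism(x, scale, rng):
    """Add i.i.d. Laplace noise to every coordinate of a record."""
    return x + rng.laplace(0.0, scale, size=x.shape)

# Strategy 1: noise injected directly into one user's record (the strategy
# the abstract identifies as the poor choice).
individual_release = laplace_mechanism(users[0], scale=0.5, rng=rng)

# Strategy 2: release a noisy average over a cluster of the k most similar
# users. Averaging divides each user's contribution (the sensitivity) by k,
# so a proportionally smaller noise scale gives comparable protection while
# preserving more utility for the recommender.
k = 10
dists = np.linalg.norm(users - users[0], axis=1)
cluster = users[np.argsort(dists)[:k]]
aggregated_release = laplace_mechanism(cluster.mean(axis=0), scale=0.5 / k, rng=rng)
```

Varying the cluster size k gives the privacy/regret trade-off dimension that, per the abstract, cannot be reached by varying the noise level alone.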

Keywords
Latent-bandit, Privacy, Linkage-attack
National Category
Computer Sciences
Identifiers
urn:nbn:se:hh:diva-52138 (URN)
Available from: 2023-11-29 Created: 2023-11-29 Last updated: 2023-11-29. Bibliographically approved

Open Access in DiVA

Thesis Fulltext (1041 kB)
File information
File name: FULLTEXT02.pdf; File size: 1041 kB; Checksum: SHA-512
816aa6426bb3076897fc920ad6eea1c68e1a184e236a77e0f7c0d27f48e39142375811a888ca60dabfeef7e37f2ddad4ab638911aa44c18874248bdb75939af4
Type: fulltext; Mimetype: application/pdf

Authority records

Galozy, Alexander
