Automating Workforce Scheduling with Large Language Models and Constraints
2025 (English) Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits
Student thesis
Abstract [en]
This thesis explores the use of large language models (LLMs) to automate workforce scheduling through natural language interaction. The primary objective is to fine-tune a general-purpose LLM to generate structured scheduling data in JSON format from natural language prompts. Using a parameter-efficient fine-tuning method (LoRA), we trained Microsoft's Phi-4 models on a domain-specific dataset of Swedish scheduling requests. The model's performance was evaluated across validation, test, and generalization datasets using structured accuracy and field-level metrics such as F1 score. The fine-tuned model achieved 84% structured accuracy on the validation set and 81.74% on a generalization test set featuring diverse scheduling scenarios. In contrast to previous work that relied on few-shot prompting, our approach emphasizes reliable structure generation followed by constraint checking through external Python functions. Comparative results show that the fine-tuned Phi-4 model outperforms OpenAI's GPT models in accuracy, though at the cost of generation time. These findings demonstrate the feasibility and effectiveness of a fine-tuned, locally deployable LLM for reliable and interpretable schedule generation.
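The pipeline the abstract describes, LLM-generated JSON followed by constraint checking in external Python functions, can be sketched roughly as below. The JSON field names (`shifts`, `employee`, `start`, `end`) and the maximum-shift-length rule are illustrative assumptions, not the thesis's actual schema or constraint set.

```python
import json

# Hypothetical example of the kind of structured scheduling JSON a
# fine-tuned model might emit; the schema here is an assumption.
model_output = """
{
  "shifts": [
    {"employee": "Anna", "day": "Monday", "start": "08:00", "end": "16:00"},
    {"employee": "Erik", "day": "Monday", "start": "08:00", "end": "18:00"}
  ]
}
"""

MAX_SHIFT_HOURS = 8  # assumed constraint, for illustration only


def shift_hours(shift):
    """Length of a same-day shift in hours."""
    start_h, start_m = map(int, shift["start"].split(":"))
    end_h, end_m = map(int, shift["end"].split(":"))
    return ((end_h * 60 + end_m) - (start_h * 60 + start_m)) / 60


def validate_schedule(raw_json):
    """Parse the model's output and collect constraint violations.

    json.loads raises ValueError on malformed output, so structural
    failures are caught before any scheduling rules are checked.
    """
    schedule = json.loads(raw_json)
    violations = []
    for shift in schedule["shifts"]:
        if shift_hours(shift) > MAX_SHIFT_HOURS:
            violations.append(
                f"{shift['employee']}: shift exceeds {MAX_SHIFT_HOURS}h"
            )
    return violations


print(validate_schedule(model_output))
```

Separating generation from validation in this way means the LLM only has to produce well-formed structure; the hard scheduling rules live in ordinary, testable code.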
Place, publisher, year, edition, pages
2025, p. 67
Keywords [en]
Workforce scheduling, Large Language Models, Phi-4, Fine-tuning, LoRA, Constraint validation, Artificial intelligence, Parameter efficient fine-tuning
National Category
Computer Sciences; Artificial Intelligence
Identifiers
URN: urn:nbn:se:hh:diva-56516
OAI: oai:DiVA.org:hh-56516
DiVA, id: diva2:1971770
External cooperation
ClearQ AB
Subject / course
Computer science and engineering
Educational program
Computer Science and Engineering, 300 credits
Presentation
2025-05-20, R4341, Kristian IV:s väg 3, Halmstad, 23:22 (English)
Supervisors
Examiners
2025-06-18, 2025-06-17, 2025-10-01. Bibliographically approved