Recognition of human behaviour within vehicles is becoming increasingly important. Paradoxically, the more control the car has (i.e., in terms of support systems), the more we need to know about the person behind the wheel [1], especially if he or she is expected to take over control from the automation. Much research has focused on sensors monitoring the outside surroundings, while sensors on the inside have received far less attention. In terms of monitoring distractions, what is currently seen as dangerous (e.g., use of mobile phones) may in the future be regarded as beneficial, helping to keep people alert in highly automated vehicles. Another reason for mapping activities inside the car is the frequently occurring mismatch between driver expectations and what today’s automated vehicles are actually capable of [2]. As long as the automation comes with limitations that require the driver to take over control at some point, it will be important to know more about what happens inside the vehicle. In this paper we describe work performed within the ongoing DRAMA project1 that combines UX research with computer vision and machine learning to gather knowledge about which activities in the cabin can be mapped and how they can be modelled to improve traffic safety and UX functionality.
Funding: Fordonsstrategisk forskning och innovation (FFI).