Publication

Multimodal Data for Road User Behavior Analysis - Final Report

Recognizing driver behaviours dynamically in safety modelling is a challenging problem, since it involves numerous feature parameters describing the driver, the car and the ambient traffic. The goal is to categorize driver behaviours into a standard metric that can be used to score safety contexts. In this project, we adopt a broad definition of context that abstracts multimodal data into aggregated driving behaviours (Biondi, Strayer, Rossi, Gastaldi, & Mulatti, 2017). Raw data span the driver and vehicle dimensions, forming multiple levels of context abstraction that reveal higher-level, decision-leading features.
Raw data used to detect driver distraction come from onboard cameras and other physiological sensors (Nees, 2021). Our project partner already operates some of these solutions, which we investigated and proposed to optimize further. Along the vehicle dimension, the raw data sources are expanded with CAN bus, Lidar and GPS data, which can be fused in different ways to infer driving conditions such as speed, acceleration, lane-keeping or lane-changing instances, and car-following gaps. In doing so, driver-state features inherent to individual drivers’ physiological attributes are consolidated. Data-fusion processes incorporate vehicle information, such as lane deviation and steering-wheel motion, to better diagnose the driving contexts used to estimate risk levels. Further integration could involve the ambient-traffic modality, such as the volume of surrounding vehicles, as well as other sources of distraction, such as cognitive distraction; these latter considerations are, however, outside the scope of this project.
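To make the feature-inference step concrete, the following is a minimal illustrative sketch, not the project's actual pipeline: it derives acceleration, a car-following gap and crude lane-change instances from hypothetical CAN/GPS samples. All function names, sampling rates and thresholds are assumptions made for the example.

```python
def acceleration(speeds_mps, dt_s):
    """Finite-difference acceleration (m/s^2) from a speed series."""
    return [(b - a) / dt_s for a, b in zip(speeds_mps, speeds_mps[1:])]

def car_following_gap_s(distance_m, speed_mps):
    """Time headway (s) to the lead vehicle; infinite when stopped."""
    return distance_m / speed_mps if speed_mps > 0 else float("inf")

def lane_change_events(lateral_offsets_m, threshold_m=1.5):
    """Indices where lateral deviation from the lane centre exceeds a
    threshold, used here as a crude proxy for a lane-change instance."""
    return [i for i, x in enumerate(lateral_offsets_m) if abs(x) > threshold_m]

# Hypothetical samples at 1 Hz
speeds = [20.0, 21.0, 22.5, 22.0]                 # m/s
print(acceleration(speeds, dt_s=1.0))             # [1.0, 1.5, -0.5]
print(car_following_gap_s(40.0, 20.0))            # 2.0 (seconds of headway)
print(lane_change_events([0.1, 0.3, 1.8, 0.2]))   # [2]
```

In a real deployment these features would be computed over sliding windows of sensor streams rather than short lists, but the windowed versions reduce to the same arithmetic.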
We introduce the concept of driving analytics, which employs data-evidenced and AI-grounded methodologies to optimize the precision of the context detection used to assert risk levels. Onboard sensors such as eye-tracking cameras analyze the driver’s visual distraction by extracting features through image-analysis processes. Machine-learning techniques then categorize driver states into distraction patterns. Similarly, we proposed integrating vehicle indicators to recognize driving patterns. Subsequently, driver state and vehicle dynamics are fused to obtain a higher-precision risk indicator, used by ADAS software to monitor and assist drivers.
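The fusion step above can be sketched as a simple late fusion of two normalized scores. This is an illustrative assumption, not the report's method: the weights, thresholds and level names are invented for the example.

```python
def fuse_risk(distraction, dynamics, w_driver=0.6, w_vehicle=0.4):
    """Weighted late fusion of a driver-state distraction score and a
    vehicle-dynamics score, both in [0, 1], into one risk indicator."""
    assert 0.0 <= distraction <= 1.0 and 0.0 <= dynamics <= 1.0
    return w_driver * distraction + w_vehicle * dynamics

def risk_level(score, low=0.3, high=0.7):
    """Map the fused score to a coarse level an ADAS component could act on."""
    if score < low:
        return "low"
    return "medium" if score < high else "high"

fused = fuse_risk(distraction=0.8, dynamics=0.5)
print(round(fused, 2), risk_level(fused))   # roughly 0.68, "medium"
```

In practice the two input scores would come from the learned distraction classifier and the vehicle-indicator model respectively, and the weights would be tuned against labelled driving contexts rather than fixed by hand.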
This project presents a proof of concept for characterizing contexts from a multimodal perspective of the data. In doing so, we report and categorize the extensive state-of-the-art work related to driver modelling (Hermannstädter & Yang, 2013), and we outline the scope of a larger-scale follow-up project, whose proposal has already been drafted and submitted.

Author(s)
Yacine Atif
Research area
Road user behaviour
Publication type
Project report
Year of publication
2021
Document
FinalReport_FP07.pdf (986.21 KB)