Project

Smart-Loop: Design of multi-modal human-machine-interaction system for keeping the driver in-the-loop in automated driving systems

Period
1 January–31 December 2020
Project manager
Pinar Boyraz Baykas

Purpose: A recent study (Victor et al., 2018) has suggested that automated driving systems should be either (i) clearly designed as assistance systems, so that the driver understands them as such (i.e. ADAS), or (ii) robustly designed as fully automated driving systems that do not rely on the driver's supervisory control. In this pre-study, we would like to explore human-machine interaction (HMI) dynamics and modes, focusing on haptic and motion cues, that can be developed in a clear and transparent way for situations in which either type of system, (i) or (ii), is considered.

For systems of type (i), an ADAS is active, and interactions between the human driver and the ADAS should increase the driver's context/situation awareness as well as provide clear and understandable procedures for shared-control and/or collaborative driving. In such systems, the aim should be to establish collaboration while avoiding over-trust in automation.

For systems of type (ii), the aim is to engage the driver and passengers in the driving context to reduce discomfort and motion sickness and to communicate the intent of the automated driving system in a clear and natural way. This can improve the acceptance of such systems.

Goals: In this project we will study various combinations of HMI modalities in different driving contexts involving (i) an ADAS and (ii) a fully automated driving system. The aim is to explore the potential of haptic and kinesiologic feedback (i.e. motion cues) and the best combination of these modalities with audio-visual channels to achieve higher driver/passenger engagement and situation awareness.

Expected Results and Effects: This project, in tandem with the SoT-Multicue project, will result in the development of a multi-modal feedback system for increasing driver engagement in automated driving. Using the preliminary results from this SAFER pre-study and the SoT-Multicue project, we plan to apply for an FFI/Vinnova project in 2020.

Implementation and Design: Work packages (WPs) will be distributed among the team members to match the SoT-Multicue project over one year. A multi-modal HMI subsystem with haptic, visual and auditory modes will first be installed in the Open-Desk-Simulator (ODS) located in the SAGA building. In the second phase of the project, the simulator at VTI will be used to explore kinesiologic interactions, since this simulator can produce a certain amount of acceleration and jerk in both the longitudinal and lateral directions.

Reference

Victor, T., Tivesten, E., Gustavsson, P., Johansson, J., Sandberg, F. and Aust, M.L., 'Automation Expectation Mismatch: Incorrect Prediction Despite Eyes on Threat and Hands on Wheel', Human Factors, Vol. 60, No. 8, December 2018, pp. 1095–1116.

Short facts

Research area
Road user behaviour
Financier(s)
SAFER Pre-Studies Phase 5
Partners
Chalmers University of Technology
VTI
Project type
SAFER Pre-study