HUDs or Voice for Reading? – AutomotiveUI’19 Talk

Teaser image: the text comprehension task with a heads-up display, and the take-over scenario.

With preprints popping up everywhere and the official conference program already online, I can proudly announce that yet another paper on productivity in automated vehicles has been accepted, this time at the venue we love the most: the Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI). At this year’s edition in Utrecht, I will present another team effort:

Clemens Schartmüller, Klemens Weigl, Philipp Wintersberger, Andreas Riener, and Marco Steinhauser. 2019. Text Comprehension: Heads-Up vs. Auditory Displays: Implications for a Productive Work Environment in SAE Level 3 Automated Vehicles. In 11th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI ’19), September 21–25, 2019, Utrecht, Netherlands. ACM, New York, NY, USA, 13 pages. https://doi.org/10.1145/3342197.3344547

Prior work has typically investigated either Heads-Up Displays (HUDs) or voice assistants for driving-related information and compared them to traditional infotainment displays. In this work, we compare these “adapted interfaces” directly, and for productive non-driving-related tasks, in a user study with objective performance, physiological, and self-rating measures in take-over scenarios. We further investigate what effects attentive user interface behavior can have in this context. The results reveal interesting contradictions between self-ratings and physiological measures of stress and workload, as well as possibly counterintuitive (at first glance) effects of attentive UI behavior in imminent take-over scenarios.

Want to know more? Attend the talk at AutomotiveUI’19 in Utrecht, the Netherlands, in paper session 6, “Multimodal Interfaces”, on Wednesday, September 25th. You can also contact me right now (using the contact form) or in September at the venue!

With increasing automation, vehicles could soon become “mobile offices”, but traditional user interfaces (UIs) for office work are not optimized for this domain. We hypothesize that productive work will only be feasible in SAE Level 3 automated vehicles if UIs are adapted to (A) the operational design domain and (B) driver-workers’ capabilities. Consequently, we studied adapted interfaces for a typical office task (text comprehension) by varying display modality (heads-up reading vs. auditory listening) as well as UI behavior in conjunction with take-over situations (attention-awareness vs. no attention-awareness). Self-ratings, physiological indicators, and objective performance measures in a driving simulator study (N = 32) allowed us to derive implications for automated vehicles as mobile workspaces. Results highlight that heads-up displays promote sequential multi-tasking and thereby reduce workload and improve productivity compared to auditory displays, which were nevertheless more attractive to users. Attention-awareness led to reduced stress but delayed driving reactions, and consequently requires further investigation.

Abstract of “Text Comprehension: Heads-Up vs. Auditory Displays: Implications for a Productive Work Environment in SAE Level 3 Automated Vehicles” by Schartmüller et al.