
Automated areas of interest analysis for usability studies of tangible screen-based user interfaces using mobile eye tracking

Published online by Cambridge University Press:  11 September 2020

M. Batliner*
Affiliation:
Product Development Group Zurich, ETH Zurich, Leonhardstr. 21, 8092 Zürich, Switzerland
S. Hess
Affiliation:
Product Development Group Zurich, ETH Zurich, Leonhardstr. 21, 8092 Zürich, Switzerland
C. Ehrlich-Adám
Affiliation:
Product Development Group Zurich, ETH Zurich, Leonhardstr. 21, 8092 Zürich, Switzerland
Q. Lohmeyer
Affiliation:
Product Development Group Zurich, ETH Zurich, Leonhardstr. 21, 8092 Zürich, Switzerland
M. Meboldt
Affiliation:
Product Development Group Zurich, ETH Zurich, Leonhardstr. 21, 8092 Zürich, Switzerland
*Author for correspondence: Martin Batliner, E-mail: martibat@ethz.ch

Abstract

The user's gaze can provide important information for human–machine interaction, but the manual analysis of gaze data is extremely time-consuming, inhibiting wide adoption in usability studies. Existing methods for automated areas of interest (AOI) analysis cannot be applied to tangible products with a screen-based user interface (UI), which have become ubiquitous in everyday life. The objective of this paper is to present and evaluate a method to automatically map the user's gaze to dynamic AOIs on tangible screen-based UIs based on computer vision and deep learning. This paper presents an algorithm for automated Dynamic AOI Mapping (aDAM), which allows the automated mapping of gaze data recorded with mobile eye tracking to predefined AOIs on tangible screen-based UIs. The evaluation of the algorithm is performed using two medical devices, which represent two extreme examples of tangible screen-based UIs. The different elements of aDAM are examined for accuracy and robustness, as well as the time saved compared to manual mapping. The break-even point for an analyst's effort with aDAM compared to manual analysis is found to be 8.9 min of gaze data. The accuracy and robustness of both the automated gaze mapping and the screen matching indicate that aDAM can be applied to a wide range of products. aDAM allows, for the first time, automated AOI analysis of tangible screen-based UIs with AOIs that dynamically change over time. The algorithm requires some additional initial input for setup and training, but thereafter the effort is determined only by computation time, regardless of the duration of gaze data analyzed, and requires no additional manual work. The efficiency of the approach has the potential to enable broader adoption of mobile eye tracking in usability testing for the development of new products and may contribute to a more data-driven usability engineering process in the future.
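The abstract describes mapping gaze recorded by a mobile eye tracker's scene camera onto predefined AOIs of a screen-based UI. As a rough illustration of the general idea only (not the authors' aDAM pipeline), the following Python sketch uses OpenCV to project a gaze point into the coordinate system of a reference screenshot via a homography estimated from matched keypoints, and then looks up the AOI rectangle that contains it. The AOI names, rectangles, and keypoint matches are hypothetical placeholders.

```python
# Minimal sketch (not the aDAM implementation): map a gaze point from the
# scene-camera frame onto a screen-based UI and look up the AOI it hits.
# Assumes matched keypoint pairs between the scene frame and a reference
# screenshot are already available (e.g. from a feature matcher).
import cv2
import numpy as np

# Hypothetical AOIs defined on the reference screenshot as (x, y, w, h).
AOIS = {
    "start_button": (40, 300, 120, 60),
    "dose_display": (40, 80, 200, 100),
}

def map_gaze_to_aoi(gaze_xy, scene_pts, screen_pts):
    """Warp a gaze point into screenshot coordinates and return the AOI name.

    gaze_xy    -- (x, y) gaze position in the scene-camera frame
    scene_pts  -- Nx2 array of matched keypoints in the scene frame
    screen_pts -- Nx2 array of the corresponding points in the screenshot
    """
    # Estimate the homography between the scene frame and the screenshot.
    H, _ = cv2.findHomography(np.float32(scene_pts), np.float32(screen_pts),
                              cv2.RANSAC, 5.0)
    if H is None:
        return None
    # Project the gaze point into screenshot coordinates.
    pt = np.float32([[gaze_xy]])            # shape (1, 1, 2) as OpenCV expects
    gx, gy = cv2.perspectiveTransform(pt, H)[0, 0]
    # Return the first AOI rectangle containing the projected gaze point.
    for name, (x, y, w, h) in AOIS.items():
        if x <= gx <= x + w and y <= gy <= y + h:
            return name
    return None
```

In practice, the per-frame correspondences would come from a feature-matching or detection step and the AOI layout would change with the UI state; the sketch only shows the final mapping and lookup step under those assumptions.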

Type
Research Article
Copyright
Copyright © The Author(s), 2020. Published by Cambridge University Press

