
ENGAGING END USERS IN AN AI-ENABLED SMART SERVICE DESIGN - THE APPLICATION OF THE SMART SERVICE BLUEPRINT SCAPE (SSBS) FRAMEWORK

Published online by Cambridge University Press:  27 July 2021

Fan Li*
Affiliation:
Eindhoven University of Technology
Yuan Lu
Affiliation:
Eindhoven University of Technology
*Corresponding author: Li, Fan, Eindhoven University of Technology, Industrial Design, The Netherlands, f.li@tue.nl

Abstract


Artificial Intelligence (AI) has expanded into diverse contexts: it infiltrates our social lives and is a critical part of algorithmic decision-making. Yet the adoption of AI technology, and of AI-enabled design in particular, by end users who are not AI experts remains limited. Incomprehensible, opaque decision-making and the difficulty of using AI are obstacles that prevent these end users from adopting AI technology. How to design the user experience (UX) around AI technologies is therefore an interesting topic to explore.

This paper investigates how non-expert end users can be engaged in the design process of an AI-enabled application by using a framework called Smart Service Blueprint Scape (SSBS), which aims to build a bridge between UX and AI systems by mapping and translating AI decisions in terms of the user experience. A Dutch mobility service called ‘stUmobiel’ was taken as a design case study, with the goal of designing a reservation platform together with stUmobiel end users. Co-creating with these users and ensuring that they understand the decision-making and service-provision process of the AI-enabled design is crucial to promoting adoption. Furthermore, concerns about AI ethics also arise in the design process and should be discussed in a broader sense.
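To make the mapping idea concrete, the sketch below is a minimal, purely illustrative reading of what one row of an SSBS-style blueprint could look like in code. The BlueprintEntry structure, its field names, and the stUmobiel-like entries are assumptions made here for illustration, not the framework as specified in the paper.

```python
from dataclasses import dataclass

@dataclass
class BlueprintEntry:
    """One row of an SSBS-style mapping (illustrative assumption): a
    frontstage service touchpoint paired with the backstage AI decision
    behind it and the plain-language explanation shown to end users."""
    touchpoint: str    # what the end user sees or does
    ai_decision: str   # the algorithmic step the touchpoint relies on
    explanation: str   # user-facing account of that decision

# Hypothetical entries for a stUmobiel-like reservation platform.
blueprint = [
    BlueprintEntry(
        touchpoint="User requests a ride for Friday 14:00",
        ai_decision="Scheduler ranks candidate slots by fleet availability",
        explanation="We suggest 14:15 because all vehicles are booked at 14:00.",
    ),
    BlueprintEntry(
        touchpoint="User confirms the suggested slot",
        ai_decision="Planner assigns a vehicle and estimates pick-up time",
        explanation="This driver was chosen for the shortest detour to your stop.",
    ),
]

# Walk the blueprint: every AI decision must carry a paired explanation.
for entry in blueprint:
    print(f"{entry.touchpoint}\n  AI: {entry.ai_decision}\n"
          f"  Shown to user: {entry.explanation}\n")
```

The point of such a structure is that no backstage AI decision appears without a paired frontstage explanation, which is the bridging role between UX and AI systems that the abstract describes.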

Type
Article
Creative Commons
CC BY-NC-ND 4.0
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives licence (http://creativecommons.org/licenses/by-nc-nd/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is unaltered and is properly cited. The written permission of Cambridge University Press must be obtained for commercial re-use or in order to create a derivative work.
Copyright
The Author(s), 2021. Published by Cambridge University Press
