
From libertarian paternalism to liberalism: behavioural science and policy in an age of new technology

Published online by Cambridge University Press:  13 December 2021

Dario Krpan*
Affiliation:
London School of Economics and Political Science, Department of Psychological and Behavioural Science, London, UK
Milan Urbaník
Affiliation:
London School of Economics and Political Science, Department of Psychological and Behavioural Science, London, UK
*Correspondence to: d.krpan@lse.ac.uk

Abstract

Behavioural science has been effectively used by policy makers in various domains, from health to savings. However, interventions that behavioural scientists typically employ to change behaviour have been at the centre of an ethical debate, given that they include elements of paternalism that have implications for people's freedom of choice. In the present article, we argue that this ethical debate could be resolved in the future through the implementation and advancement of new technologies. We propose that several technologies that are currently available and are rapidly evolving (i.e., virtual and augmented reality, social robotics, gamification, self-quantification, and behavioural informatics) have the potential to be integrated with various behavioural interventions in a non-paternalistic way. More specifically, people would decide for themselves which behaviours they want to change and select the technologies they want to use for this purpose, and the role of policy makers would be to develop transparent behavioural interventions for these technologies. In that sense, behavioural science would move from libertarian paternalism to liberalism, given that people would freely choose how they want to change, and policy makers would create technological interventions that make this change possible.

This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
Copyright © The Author(s), 2021. Published by Cambridge University Press

Introduction

Behavioural science interventions have been implemented in various policy areas, from health and education to justice and sustainability, and used to influence behaviours such as pension savings, tax compliance, or healthy food consumption, to name but a few (e.g., Oliver, Reference Oliver2013, Reference Oliver2019; Halpern, Reference Halpern2015; Sunstein, Reference Sunstein, Reisch and Thøgersen2015, Reference Sunstein2020; Sanders et al., Reference Sanders, Snijders and Hallsworth2018). Although these interventions are highly diverse and can be based on different theoretical assumptions, an underlying characteristic they share is that they influence behaviour by changing the ‘architecture’ of the context in which people act (Dolan et al., Reference Dolan, Hallsworth, Halpern, King, Metcalfe and Vlaev2012; Vlaev et al., Reference Vlaev, King, Dolan and Darzi2016; Mongin & Cozic, Reference Mongin and Cozic2018). For example, this may involve altering the order of foods in a cafeteria, changing how the information a person considers when deciding is framed, exposing people to a scent before they are about to act, etc. (de Lange et al., Reference De Lange, Debets, Ruitenburg and Holland2012; Marteau et al., Reference Marteau, Hollands and Fletcher2012).

Interventions that behavioural scientists use are typically linked to the concept of libertarian paternalism (Sunstein & Thaler, Reference Sunstein and Thaler2003; Thaler & Sunstein, Reference Thaler and Sunstein2003, Reference Thaler and Sunstein2008; Hansen, Reference Hansen2016; Oliver, Reference Oliver2019). Paternalism in this context means that the interventions are aimed at influencing people's behaviour in a specific direction, and this behavioural change should be welfare promoting and thus make people ‘better off’ according to some criterion that is established as objectively as possible (Thaler & Sunstein, Reference Thaler and Sunstein2003, Reference Thaler and Sunstein2008; Sunstein, Reference Sunstein2014). Although not all behavioural science interventions are necessarily designed or applied to make people ‘better off’, which means that they can, in principle, be inconsistent with paternalism, they should not violate this principle when ethically applied (Lades & Delaney, Reference Lades and Delaney2020). Proponents of libertarian paternalism argue that, despite being paternalistic, behavioural interventions are aligned with liberalism (Thaler & Sunstein, Reference Thaler and Sunstein2003, Reference Thaler and Sunstein2008; Sunstein, Reference Sunstein2014), which broadly refers to respecting people's freedom of choice (Gane, Reference Gane2021). For example, it is claimed that these interventions respect this freedom because, unlike prohibitions or bans, changing the ‘architecture’ of the context in which people act does not forbid an action or take any choice options away from them; people, therefore, remain free to select whatever course of action they desire (Thaler & Sunstein, Reference Thaler and Sunstein2008).

However, despite its emphasis on the freedom of choice, libertarian paternalism has faced several criticisms that have argued it is not compatible with liberalism for various reasons (Alberto & Salazar, Reference Alberto and Salazar2012; Gill & Gill, Reference Gill and Gill2012; Grüne-Yanoff, Reference Grüne-Yanoff2012; Heilmann, Reference Heilmann2014; Rebonato, Reference Rebonato2014; Barton & Grüne-Yanoff, Reference Barton and Grüne-Yanoff2015; Mongin & Cozic, Reference Mongin and Cozic2018; Le Grand, Reference Le Grand2020; Reijula & Hertwig, Reference Reijula and Hertwig2020). First, interventions aligned with libertarian paternalism interfere in choice processes and hence limit negative freedom, which involves freedom from interference by other people (Grüne-Yanoff, Reference Grüne-Yanoff2012; Gane, Reference Gane2021). Second, these interventions are frequently not transparent, which means that people may not understand how they operate, in which direction they should change their behaviour, and/or to what degree they are supported by sound scientific evidence (Grüne-Yanoff, Reference Grüne-Yanoff2012; Barton & Grüne-Yanoff, Reference Barton and Grüne-Yanoff2015). People's freedom of choice is, therefore, limited because they lack the information about how they are being influenced and why, and hence they cannot deliberate on this information to make a choice. Third, libertarian paternalism does not respect the subjectivity or plurality of values, which in a nutshell means that it endorses changing behaviours in a specific direction that is considered welfare promoting (e.g., eating healthy or being physically active), rather than respecting people's individual freedoms by changing behaviour in line with ‘the values that individuals have determined as their values’ (Grüne-Yanoff, Reference Grüne-Yanoff2012, p. 641). To resolve these impediments to freedom, the critics of libertarian paternalism have proposed that behavioural interventions should be devised to promote people's capability to make their own choices (i.e., boosting) rather than nudging them to act in a particular direction (Hertwig & Grüne-Yanoff, Reference Hertwig and Grüne-Yanoff2017).

In the present article, we look at this issue from an alternative perspective. We argue that one of the possible solutions to making behavioural interventions more compatible with liberalism is integrating them with cutting edge developments in technology. More specifically, there are various promising technological tools from different domains (e.g., social robotics, self-quantification, etc.) that have either already been used or could potentially be used to implement behavioural change techniques. Importantly, administering behavioural interventions via these technologies would require that people deliberately choose which behaviour(s) they want to change (if any) and select the desired technological tool(s) and intervention(s) for this purpose. Also, transparency could be ensured by creating a summary for potential users regarding how each intervention operates, in which direction it should change their behaviour, and to what degree it is supported by sound scientific evidence. Overall, this approach would be consistent with liberalism because it would ensure negative freedom, transparency, and the freedom to select interventions and desired behaviours to change in line with one's values and beliefs.

In this article, we first overview the technological domains we find compatible with behavioural interventions and examine both the interventions that have already been implemented within these domains and the potential they have for future integration with behavioural change techniques. We then explore whether knowing how the interventions operate and the behaviours they target would be an obstacle to the effectiveness of combining cutting edge technologies with behavioural science. Finally, we discuss new ethical issues that could arise because of this approach, and we address additional policy considerations. To aid the interpretation of the article, in Table 1, we overview the technologies we cover and their potential for behaviour change.

Table 1. Overview of the New Technologies Covered in the Present Article and Their Potential for Behaviour Change

Behavioural Science in an Age of New Technology

Virtual and Augmented Reality

Introducing the technological domain

Virtual reality (VR) and augmented reality (AR) share one main characteristic – they can alter the visual environment in which people act. The main difference is that VR immerses people in a virtual world inside a VR headset (Riva et al., Reference Riva, Baños, Botella, Mantovani and Gaggioli2016), whereas AR changes people's actual physical environment by projecting holograms onto it (Ng et al., Reference Ng, Ma, Ho, Ip and Fu2019). For example, by using a VR headset, we can immerse ourselves in a virtual world in which we assume the appearance of an older version of ourselves (Hershfield et al., Reference Hershfield, Goldstein, Sharpe, Fox, Yeykelis, Carstensen and Bailenson2011), whereas AR glasses can project virtual material objects or beings into the space around us, thus blending the virtual and physical worlds into one (Riva et al., Reference Riva, Baños, Botella, Mantovani and Gaggioli2016). Whereas VR headsets such as Oculus Rift, HTC Vive, or Google Daydream View are relatively affordable and tend to be widely used, AR glasses such as Microsoft Hololens or Magic Leap are still not easily affordable for most individuals and tend to be used by large organizations and research labs (Elmqaddem, Reference Elmqaddem2019; Xue et al., Reference Xue, Sharma and Wild2019).

Theoretical argument and available evidence

The main benefit of VR and AR regarding behaviour change is that they can directly alter the visual context of action. A theoretical paradigm that supports the effectiveness of these technologies is construal level theory (CLT). According to CLT, one of the reasons why people sometimes fail to act is that the consequences or circumstances of action are too psychologically distant (Spence et al., Reference Spence, Poortinga and Pidgeon2012; Kim et al., Reference Kim, Schnall and White2013; Jones et al., Reference Jones, Hine and Marks2017; Touré-Tillery & Fishbach, Reference Touré-Tillery and Fishbach2017; Chu & Yang, Reference Chu and Yang2018; Kogut et al., Reference Kogut, Ritov, Rubaltelli and Liberman2018; Simonovits et al., Reference Simonovits, Kezdi and Kardos2018; Brügger, Reference Brügger2020). That is, the action may concern some event that will not happen immediately, a person that is not close to us, or a place that is not near us. For example, people may not recycle because climate change feels far away, they may not attempt to reduce their prejudice because they do not know what it feels like to be the target of the prejudice, or they may not bother donating to charity because the beneficiary is from a distant country. Construal level theory posits that reducing psychological distance to these events, circumstances, or individuals by making them more concrete can propel action, given that concreteness is more emotionally arousing and may activate various motivational mechanisms that propel behaviour (Van Boven et al., Reference Van Boven, Kane, McGraw and Dale2010; Bruyneel & Dewitte, Reference Bruyneel and Dewitte2012; Kim et al., Reference Kim, Schnall and White2013). This is exactly what AR or VR can achieve: for example, they can visually simulate the consequences of climate change in one's current environment or transform people into a person they are prejudiced against, thus making action more likely (Riva et al., Reference Riva, Baños, Botella, Mantovani and Gaggioli2016).

In accordance with this theoretical paradigm, the effectiveness of VR in changing behaviour has been empirically supported in numerous domains, including pension savings (Hershfield et al., Reference Hershfield, Goldstein, Sharpe, Fox, Yeykelis, Carstensen and Bailenson2011), prejudice and bias reduction (Banakou et al., Reference Banakou, Hanumanthu and Slater2016, Reference Banakou, Kishore and Slater2018), sustainability and environment (Bailey et al., Reference Bailey, Bailenson, Flora, Armel, Voelker and Reeves2015; Nelson et al., Reference Nelson, Anggraini and Schlüter2020), prosocial behaviour (Rosenberg et al., Reference Rosenberg, Baughman and Bailenson2013), domestic violence (Seinfeld et al., Reference Seinfeld, Arroyo-Palacios, Iruretagoyena, Hortensius, Zapata, Borland, de Gelder, Slater and Sanchez-Vives2018), parenting (Hamilton-Giachritsis et al., Reference Hamilton-Giachritsis, Banakou, Quiroga, Giachritsis and Slater2018), physical activity (Ng et al., Reference Ng, Ma, Ho, Ip and Fu2019), etc. As an example, embodying white individuals in a virtual body of a black person reduced their racial prejudice (Banakou et al., Reference Banakou, Hanumanthu and Slater2016). A systematic literature review by Lanier et al. (Reference Lanier, Waddell, Elson, Tamul, Ivory and Przybylski2019) has shown that, even if VR research is still in its early stages and the quality of studies generally needs to improve, the studies that have been conducted so far have good evidential value and indicate that VR interventions may effectively change psychological and behavioural outcomes. However, the studies have several main disadvantages. First, they are mostly lab studies, and it is therefore not known to what extent VR can change behaviours in the real world. Second, the studies typically involve short-term effects, which means that the impact of VR on behaviour is assessed immediately after the interventions or up to one week later at most, but it is not known whether they can create sustained behaviour change. Finally, the sample sizes are generally small (34 participants per condition on average; Lanier et al., Reference Lanier, Waddell, Elson, Tamul, Ivory and Przybylski2019), which means that the magnitude of behaviour change observed cannot be estimated with precision. Therefore, to reveal the full potential of VR for behaviour change, researchers will need to focus on field studies that examine long-term effects using larger sample sizes.

In contrast to VR, very few studies have examined the impact of AR on behaviour, given that this technology is not yet as widely used as VR. Therefore, although no well-informed conclusions can be made in this regard, researchers agree that this technological innovation has a large untapped potential for behaviour change (Riva et al., Reference Riva, Baños, Botella, Mantovani and Gaggioli2016; Ng et al., Reference Ng, Ma, Ho, Ip and Fu2019), as we illustrate in the next section.

Future potential

Given that VR is already widely used, its potential applications in behavioural public policy will largely depend on the degree to which behavioural scientists adopt this technology, design interventions for it, and test them. Currently, most research regarding VR and behaviour change has been conducted outside the realm of behavioural science (see Lanier et al. (Reference Lanier, Waddell, Elson, Tamul, Ivory and Przybylski2019) and the studies reviewed above). For example, most interventions are not grounded in theories and approaches of behaviour change (e.g., Michie et al., Reference Michie, Van Stralen and West2011) and/or do not use behavioural science intervention techniques such as defaults, salience, framing, norms, and simplification of complex choices (Dolan et al., Reference Dolan, Hallsworth, Halpern, King, Metcalfe and Vlaev2012; Loewenstein & Chater, Reference Loewenstein and Chater2017; Oliver, Reference Oliver2019). In this regard, we recommend that behavioural scientists interested in policy examine VR as a tool for influencing behaviour and focus on developing VR-based interventions informed by behavioural principles.

Although AR has so far not been comprehensively researched regarding behavioural interventions, we posit that it has an even greater potential for changing behaviour than VR because it can directly alter the environment in which people act. To illustrate this potential, let us imagine a scenario in which a person has decided to eat more vegetables, and fewer sweets and chocolate. In that case, AR equipment could be programmed to recognize sweets or chocolate in real time, even before the person consciously detects them. Then, it could redirect the person's attention in another direction, distract the person with sounds or colours, hide the sweets by altering the visual environment, make the sweets appear disgusting (e.g., by creating the hologram of a worm entering the sweets), or produce verbal prompts or sounds to discourage consumption. On the other hand, the equipment could also be programmed to recognize vegetables in real time and make them salient or visually more appealing, produce verbal prompts or sounds to encourage consumption, etc. In other words, AR has the potential to dynamically implement numerous behavioural tools and principles in real time. Although the capacity of AR to fulfil this potential will greatly depend on further technological developments, and it may take another 5–10 years before this tool reaches an adequate level of usability and adoption, behavioural scientists can already set the stage for this by devising and testing AR-based interventions.
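
To make this scenario more concrete, the following sketch (written in Python purely for illustration) shows the kind of real-time detect-and-intervene loop such an AR intervention might run. The object-detection and overlay functions, the food categories, and the visual ‘treatments’ are hypothetical placeholders rather than an existing AR API.

```python
# Illustrative sketch only: how an AR headset might apply user-chosen food
# interventions in real time. The detection/rendering layer is stubbed out;
# a real system would use the headset's computer-vision and rendering APIs.

# User-selected intervention rules: which food categories to discourage or
# encourage, and which (hypothetical) visual treatment to apply.
INTERVENTIONS = {
    "sweets":     {"goal": "discourage", "treatment": "aversive_overlay"},
    "chocolate":  {"goal": "discourage", "treatment": "hide_object"},
    "vegetables": {"goal": "encourage",  "treatment": "highlight_salient"},
}

def detect_objects(camera_frame):
    """Placeholder for a real-time object recognizer (e.g., a vision model)."""
    return camera_frame  # in this sketch, frames are already lists of labels

def apply_overlay(label, treatment):
    """Placeholder for the headset's rendering call."""
    print(f"{label}: applying '{treatment}' overlay")

def process_frame(camera_frame):
    for label in detect_objects(camera_frame):
        rule = INTERVENTIONS.get(label)
        if rule:  # only intervene on categories the user opted into
            apply_overlay(label, rule["treatment"])

# Simulated stream of camera frames (each a list of recognized labels).
for frame in [["chocolate", "vegetables"], ["sweets"], ["bread"]]:
    process_frame(frame)
```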

Social Robotics

Introducing the technological domain

Social robots are autonomous or semi-autonomous agents that communicate and interact with people, closely imitating human behaviour, looks, and/or emotional expressions (Broadbent, Reference Broadbent2017). These robots are typically designed to behave according to the norms expected by the individuals they interact with (Bartneck & Forlizzi, Reference Bartneck and Forlizzi2004). Simply put, social robots are not user-friendly computers that operate as machines; rather, they are user-friendly computers that operate as humans (Zhao, Reference Zhao2006). They are made to interact with humans as helpers and artificial companions in hospitals, schools, homes, or social care facilities (Broadbent, Reference Broadbent2017; Belpaeme et al., Reference Belpaeme, Kennedy, Ramachandran, Scassellati and Tanaka2018). Some examples of social robots include the Nao humanoid robot, which can perform various human-like functionalities such as dancing, walking, speaking, or recognizing faces and objects, and Alyx, which teaches people with autism how to recognize emotional cues. An additional subcategory of social robotics is robopets – robots that appear and behave like companion animals, such as the Aibo robot dog (Eachus, Reference Eachus2001; Abbott et al., Reference Abbott, Orr, McGill, Whear, Bethel, Garside, Stein and Thompson-Coon2019). Importantly, social robots do not necessarily need to resemble living beings like humans or pets – it is sufficient that they can verbally communicate with people in a human-like manner (Broadbent, Reference Broadbent2017).

Theoretical argument and available evidence

Several lines of argument indicate that social robots could effectively change behaviour by acting as messengers (Dolan et al., Reference Dolan, Hallsworth, Halpern, King, Metcalfe and Vlaev2012) that prompt people to undertake a certain behaviour of interest. First, these robots can be programmed to possess characteristics of effective messengers, including credibility, trust, and empathy (Reeves et al., Reference Reeves, Wise, Maldonado, Kogure, Shinozawa and Naya2003; Cialdini & Cialdini, Reference Cialdini and Cialdini2007; Looije et al., Reference Looije, Neerincx and Cnossen2010, Reference Looije, van der Zalm, Neerincx and Beun2012; Dolan et al., Reference Dolan, Hallsworth, Halpern, King, Metcalfe and Vlaev2012; Seo et al., Reference Seo, Geiskkovitch, Nakane, King and Young2015). Second, they can positively impact self-efficacy (Matsuo et al., Reference Matsuo, Miki, Takeda and Kubota2015; El Kamali et al., Reference El Kamali, Angelini, Caon, Carrino, Röcke, Guye, Rizzo, Mastropietro, Sykora, Elayan and Kniestedt2020) and intrinsic motivation (Fasola & Matarić, Reference Fasola and Mataric2012), both of which are highly important factors in initiating and maintaining behaviour change (Bandura, Reference Bandura1997; Ryan & Deci, Reference Ryan and Deci2000). Third, relative to humans, social robots may be less likely to evoke psychological reactance – a motivational state characterized by anger that can occur when people are asked to change their behaviour but react against it because they feel their freedom of action has been undermined (Brehm, Reference Brehm1966; Brehm & Brehm, Reference Brehm and Brehm2013). Social agency theory posits that people are more likely to experience psychological reactance as the social agency of the messenger increases (i.e., the more the messenger is characterized by human-like social cues, including human-like face and head movements, facial expressions, affective intonation of speech, etc.; Roubroeks et al., Reference Roubroeks, Ham and Midden2011; Ghazali et al., Reference Ghazali, Ham, Barakova and Markopoulos2018). Although social robots are similar to humans, they are not humans and therefore have lower social agency in comparison. People may thus find robot messengers less threatening to their autonomy than other humans and experience lower reactance in response to prompts delivered by them. An opposite argument can also be made: some people may dislike interacting with robots due to the lack of human connection (e.g., Nomura et al., Reference Nomura, Kanda and Suzuki2006), which might impede their effectiveness as messengers. However, there is currently no theoretical or empirical support for this premise, especially because there are many situations where people prefer robots over other humans (Broadbent, Reference Broadbent2017; Granulo et al., Reference Granulo, Fuchs and Puntoni2019).

Despite the outlined theoretical arguments, the capacity of social robots to positively impact behaviour as messengers has rarely been investigated. These robots have primarily been studied as assistants in the domains of education, elderly care, and treatment of autism spectrum disorders (Abdi et al., Reference Abdi, Al-Hindawi, Ng and Vizcaychipi2018; Belpaeme et al., Reference Belpaeme, Kennedy, Ramachandran, Scassellati and Tanaka2018; Robinson et al., Reference Robinson, Cottier and Kavanagh2019). In this regard, they were shown to improve children's experiences of learning and their learning outcomes (Belpaeme et al., Reference Belpaeme, Kennedy, Ramachandran, Scassellati and Tanaka2018); to beneficially influence the wellbeing, cognition, and physical health of the elderly (Abdi et al., Reference Abdi, Al-Hindawi, Ng and Vizcaychipi2018); and to enhance the learning of social skills for patients suffering from autism spectrum disorders (Pennisi et al., Reference Pennisi, Tonacci, Tartarisco, Billeci, Ruta, Gangemi and Pioggia2016). Although only a few studies have examined whether social robots can change behaviour via messages or prompts, which is of interest to behavioural public policy (Oliver, Reference Oliver2013), those studies showed promising findings (Casaccia et al., Reference Casaccia, Revel, Scalise, Bevilacqua, Rossi, Paauwe, Karkowsky, Ercoli, Serrano, Suijkerbuijk and Lukkien2019; Tussyadiah & Miller, Reference Tussyadiah and Miller2019; Mehenni et al., Reference Mehenni, Kobylyanskaya, Vasilescu, Devillers, D'Haro, Callejas and Nakamura2020; Robinson et al., Reference Robinson, Connolly, Hides and Kavanagh2020). For example, Robinson et al. (Reference Robinson, Connolly, Hides and Kavanagh2020) provided preliminary evidence that motivational messages communicated by a robot can reduce consumption of unhealthy snacks.

Future potential

Several authors have argued that social robots should be used to administer interventions aimed at influencing various behaviours that are beneficial to society, ranging from charitable giving to pro-environmental behaviour (Borenstein & Arkin, Reference Borenstein and Arkin2017; Sequeira, Reference Sequeira2018; Tussyadiah & Miller, Reference Tussyadiah and Miller2019; Rodogno, Reference Rodogno, Seibt, Hakli and Nørskov2020). Developments in this regard will be driven by the efforts policy makers invest in creating appropriate messaging interventions that can be implemented by social robots. Indeed, social robots are currently widely available and many of them are relatively affordable (Broadbent, Reference Broadbent2017; Belpaeme et al., Reference Belpaeme, Kennedy, Ramachandran, Scassellati and Tanaka2018); the lack of behavioural interventions devised for this technological tool can, therefore, primarily be explained by the fact that very little research has been done to create and test such interventions. In addition, the effectiveness of social robots as messengers will depend on future advancements in their design, given that the degree to which they are interactive may improve intervention success (Bartneck et al., Reference Bartneck, Nomura, Kanda, Suzuki and Kennsuke2005; Song & Luximon, Reference Song and Luximon2020). The design is also crucial to overcome one of the main potential issues in human–robot interaction known as the uncanny valley – a phenomenon according to which robots that are similar to humans but have certain details that are strikingly non-human can cause eeriness and revulsion (Mathur & Reichling, Reference Mathur and Reichling2016; Ciechanowski et al., Reference Ciechanowski, Przegalinska, Magnuski and Gloor2019; Kätsyri et al., Reference Kätsyri, de Gelder and Takala2019). Lastly, broad adoption of social robots in administering behavioural interventions may depend on whether these robots and the interventions designed for them can overcome specialization. Currently, the few examples of social robots that were used to implement message interventions typically did so within a single domain, such as healthy eating (Robinson et al., Reference Robinson, Connolly, Hides and Kavanagh2020). However, a multipurpose social robot that can help humans to change in a variety of domains (e.g., from health to pro-environmental behaviour to financial planning) may be both more cost-effective and more practical from a usability perspective.
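
As a rough illustration of what such a multipurpose, opt-in robot messenger could look like, the sketch below selects a prompt only from domains the user has explicitly chosen; the domains and message texts are invented examples, not tested interventions.

```python
# Illustrative sketch: a multipurpose robot messenger that delivers prompts only
# in the domains a user has explicitly opted into. Message texts and domains are
# invented for illustration; a real system would draw on tested interventions.
import random

MESSAGE_LIBRARY = {
    "healthy_eating": ["How about a piece of fruit with lunch today?"],
    "physical_activity": ["A ten-minute walk now would count towards your goal."],
    "financial_planning": ["You planned to review your savings this week."],
}

def deliver_prompt(opted_in_domains, library=MESSAGE_LIBRARY):
    """Pick one opted-in domain at random and return a prompt for it."""
    available = [d for d in opted_in_domains if d in library]
    if not available:
        return None  # respect the user's choice not to be prompted
    domain = random.choice(available)
    return random.choice(library[domain])

print(deliver_prompt(["healthy_eating", "physical_activity"]))
```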

Gamification

Introducing the technological domain

Simply put, gamification is a process of making a game of something that is not a game. In a more academic sense, it refers to the use of game design elements in non-gaming contexts (Baptista & Oliveira, Reference Baptista and Oliveira2019). These game design elements vary greatly and comprise the use of badges (Hamari, Reference Hamari2017), points (Attali & Arieli-Attali, Reference Attali and Arieli-Attali2015), levels (Jones et al., Reference Jones, Madden, Wengreen, Aguilar and Desjardins2014), leader boards (Morschheuser et al., Reference Morschheuser, Hamari and Maedche2018), and avatars (Diefenbach & Müssig, Reference Diefenbach and Müssig2019), to name but a few. The non-gaming contexts to which the design elements can be applied have a broad range, from learning how to use a statistical software to doing household chores (Diefenbach & Müssig, Reference Diefenbach and Müssig2019). Some popular examples of gamification include the Forest app that helps people stay away from their smartphone by planting and growing a virtual tree, or Duolingo, where people can level up as they learn new languages.

Theoretical argument and available evidence

Theoretical support for positive behavioural effects of gamification is grounded in the self-determination theory (Deci & Ryan, Reference Deci and Ryan2000; Ryan & Deci, Reference Ryan and Deci2000). This theory outlines that humans have three motivational needs – competence, autonomy, and relatedness (Deci & Ryan, Reference Deci and Ryan2000; Ryan & Deci, Reference Ryan and Deci2000). If an activity satisfies these needs, it is intrinsically motivating. If, however, this is not the case because the activity is driven by external factors such as money, it is extrinsically motivating. Playing games generally fulfils each of the three needs (Przybylski et al., Reference Przybylski, Rigby and Ryan2010, Mekler et al., Reference Mekler, Brühlmann, Tuch and Opwis2017; Koivisto & Hamari, Reference Koivisto and Hamari2019). First, engaging in game playing is typically a voluntary decision undertaken at one's discretion, and it thus promotes autonomy. Game design elements such as creating one's own avatar can further enhance autonomy (Pe-Than et al., Reference Pe-Than, Goh and Lee2014). In terms of competence, the key element of games is challenging the player to overcome various obstacles. Numerous game design elements such as dynamic difficulty adjustment or performance indicators such as leader boards satisfy the need for competence (Pe-Than et al., Reference Pe-Than, Goh and Lee2014). Moreover, the need for relatedness is often satisfied via social environments and in-game interactions (Koivisto & Hamari, Reference Koivisto and Hamari2019). The fulfilment of motivational needs should not only enhance the effectiveness of games through intrinsic motivation but also increase their enjoyment (Pe-Than et al., Reference Pe-Than, Goh and Lee2014).

The empirical research on gamification and behaviour change has focused primarily on the domains of education, physical exercise, and crowdsourcing: around 70% of all the studies were conducted in these domains (Koivisto & Hamari, Reference Koivisto and Hamari2019). Although several studies showed mixed findings, most studies produced positive evidence in support of gamification effectiveness (Seaborn & Fels, Reference Seaborn and Fels2015; Johnson et al., Reference Johnson, Deterding, Kuhn, Staneva, Stoyanov and Hides2016, Reference Johnson, Horton, Mulcahy and Foth2017; Looyestyn et al., Reference Looyestyn, Kernot, Boshoff, Ryan, Edney and Maher2017; Koivisto & Hamari, Reference Koivisto and Hamari2019). The main limitation in this regard is that the research conducted tends to be of low or moderate quality, with many studies using small sample sizes or non-representative samples, or lacking randomization in treatment allocation (Johnson et al., Reference Johnson, Deterding, Kuhn, Staneva, Stoyanov and Hides2016, Reference Johnson, Horton, Mulcahy and Foth2017; Koivisto & Hamari, Reference Koivisto and Hamari2019; Zainuddin et al., Reference Zainuddin, Chu, Shujahat and Perera2020). Furthermore, many studies relied primarily on self-reported measures of outcome variables capturing behaviour change and lacked theoretical foundations for the hypotheses (Seaborn & Fels, Reference Seaborn and Fels2015; Johnson et al., Reference Johnson, Horton, Mulcahy and Foth2017; Koivisto & Hamari, Reference Koivisto and Hamari2019; Zainuddin et al., Reference Zainuddin, Chu, Shujahat and Perera2020). Lastly, only a few game design elements have been comprehensively investigated (e.g., badges, points, and leader boards; Hamari et al., Reference Hamari, Koivisto and Sarsa2014; Seaborn & Fels, Reference Seaborn and Fels2015; Koivisto & Hamari, Reference Koivisto and Hamari2019), whereas other, less typical elements have been neglected. Therefore, gamification overall shows considerable promise for effective behaviour change, but more high-quality studies need to be conducted to maximize its potential.

Future potential

For gamification to be effectively used in behavioural public policy, researchers will first need to comprehensively examine which game design elements and their combinations drive behaviour change. Although significant advances have been made in this regard, as previously indicated, only a few of the elements have been extensively and systematically researched so far (Koivisto & Hamari, Reference Koivisto and Hamari2019). Policy makers will also need to collaborate increasingly with computer scientists and game designers, because even though many studies on gamification and behaviour change have been conducted, few of them have been grounded in theories of behaviour change. Input from behavioural scientists is, therefore, essential to fulfil the potential of gamification. An additional challenge to making gamification effective is overjustification (Meske et al., Reference Meske, Brockmann, Wilms, Stieglitz, Stieglitz, Lattemann, Robra-Bissantz, Zarnekow and Brockmann2017). That is, even if games can propel intrinsic motivation as previously discussed, several game design elements such as points can serve as external reinforcements if they are associated with external rewards (e.g., when points earned for completing a desired behaviour such as exercise can be exchanged for leisure time or other desirable activities) and therefore diminish intrinsic motivation (Deci, Reference Deci1971; Deci et al., Reference Deci, Koestner and Ryan2001). The main aim for behavioural scientists should, therefore, be to design games that make the desired behaviours that the interventions target rewarding in themselves.

Self-Quantification

Introducing the technological domain

Self-quantification refers to the use of technology to self-track any kind of biological, physical, behavioural, or environmental information (Swan, Reference Swan2013; Maltseva & Lutz, Reference Maltseva and Lutz2018). Some popular examples of the practice include the automatic tracking of physical exercise through wearable devices like smartwatches and fitness trackers, or self-logging of dietary information through various smartphone applications. Self-quantification can also be used in many other areas, from sexual and reproductive behaviour (Lupton, Reference Lupton2015) to participation in green consumption activities (Zhang et al., Reference Zhang, Zhang, Zhang and Li2020). The practice is prevalent in the health domain – almost 70% of the US adult population tracked their exercise, diet, or weight in 2012 (Fox & Duggan, Reference Fox and Duggan2013). The goal of self-quantification is to offer people an insight into their own behaviour, given that the underlying assumption of this practice is that the ‘self-knowledge through numbers’ (Heyen, Reference Heyen and Selke2016, p. 283) can both help people realize which behaviours they may want to change and motivate them to undertake the change (Card et al., Reference Card, Mackinlay and Shneiderman1999; North, Reference North2006; Kersten-van Dijk et al., Reference Kersten-van Dijk, Westerink, Beute and Ijsselsteijn2017). Self-quantification is, therefore, also referred to as ‘personal science’ because it involves studying one's own behaviour to answer personal questions (Wolf & De Groot, Reference Wolf and De Groot2020).

Theoretical argument and available evidence

Multiple theoretical arguments suggest that self-quantification can propel behaviour change. The social-cognitive theory outlines two key drivers of this change that are leveraged by self-quantification – self-monitoring and self-reflectiveness (Bandura, Reference Bandura1998, Reference Bandura2001, Reference Bandura2004). Monitoring one's behavioural patterns and the surrounding circumstances is the first prerequisite for modifying a behaviour (Bandura, Reference Bandura1998, Reference Bandura2001). For self-monitoring to be effective in this regard, it is important that the person themselves has selected the behaviours to monitor and the desired end states rather than this being imposed on them, and that they physically record their behaviour throughout the process of monitoring (Harkin et al., Reference Harkin, Webb, Chang, Prestwich, Conner, Kellar and Sheeran2016). Then, by employing self-reflectiveness, which is a metacognitive capacity to reflect on oneself and the adequacy of one's actions and thoughts, they can dwell on the monitored behaviour and examine it in relation to personal goals and standards, which may ultimately lead to insights about changing their behaviour (Bandura, Reference Bandura2001).

Self-quantification supports both self-monitoring and self-reflectiveness. It allows a person to collect data about their behaviour, thus providing an overview of the actions they perform. The person can then reflect on the data by evaluating them against their motives, values, and goals, which may in turn lead to new insights that trigger behaviour change (Ploderer et al., Reference Ploderer, Reitberger, Oinas-Kukkonen and van Gemert-Pijnen2014). For example, a person may monitor how much time they spend on different activities on a weekly basis. Then, by reflecting on the data in relation to their goals and values, they may conclude they do not sufficiently prioritize important personal goals, which may in turn prompt them to incorporate more meaningful activities into their schedule.
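
A minimal sketch of this self-monitoring and self-reflection loop is given below; the activity categories, logged hours, and personal targets are invented, and the point is simply the comparison between tracked behaviour and self-chosen goals.

```python
# Illustrative sketch: the self-monitoring/self-reflection loop described above,
# with invented activity logs and personal targets. The point is the comparison
# between tracked time and self-chosen priorities, not the specific numbers.

weekly_log_hours = {"social media": 14, "exercise": 2, "reading": 1, "family": 6}
personal_targets = {"exercise": 5, "reading": 4, "family": 8}  # set by the user

def reflect(log, targets):
    """Return the gap between tracked behaviour and the user's own targets."""
    return {activity: log.get(activity, 0) - goal
            for activity, goal in targets.items()}

for activity, gap in reflect(weekly_log_hours, personal_targets).items():
    status = "above" if gap >= 0 else "below"
    print(f"{activity}: {abs(gap)} h {status} your weekly target")
```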

Although there is a reasonable theoretical argument for the positive role of self-quantification in behaviour change, the empirical research on this topic is limited in both quantity and quality. A literature review by Kersten-van Dijk et al. (Reference Kersten-van Dijk, Westerink, Beute and Ijsselsteijn2017) indicates that, in most of the studies conducted to date, self-quantification improved people's insights about their behaviour. However, only five articles evaluated the impact of self-quantification on behaviour change, and two of these articles documented positive behavioural effects (Consolvo et al., Reference Consolvo, Klasnja, McDonald, Avrahami, Froehlich, LeGrand, Libby, Mosher and Landay2008; Hori et al., Reference Hori, Tokuda, Miura, Hiyama and Hirose2013). Therefore, whereas self-quantifying one's own behaviour using various technologies is a promising approach to creating behaviour change, policy makers need to further integrate this approach with effective behavioural change techniques to maximize its potential.

Future potential

The use and effectiveness of self-quantification in behavioural public policy will likely depend on two future developments: (1) the extent to which policy makers integrate self-quantification with cutting-edge insights on behaviour change and (2) the advancement of self-tracking technological devices themselves. Concerning the first development, the self-improvement hypothesis at the core of self-quantification posits that gaining insights about one's own behaviour through data should inspire a change (Kersten-van Dijk et al., Reference Kersten-van Dijk, Westerink, Beute and Ijsselsteijn2017). In behavioural science, however, it is well known that information itself is not sufficient to modify behaviour (Thaler & Sunstein, Reference Thaler and Sunstein2008; Marteau et al., Reference Marteau, Hollands and Fletcher2012). Indeed, whereas people may decide to change after seeing data about their activities, it is how the data are presented to them that should eventually determine their motivation and prompt the efforts to change (Johnson et al., Reference Johnson, Shu, Dellaert, Fox, Goldstein, Häubl, Larrick, Payne, Peters, Schkade and Wansink2012; Otten et al., Reference Otten, Cheng and Drewnowski2015; Congiu & Moscati, Reference Congiu and Moscati2020). Therefore, to maximize the potential of self-quantification, policy makers should work on developing and testing the tools of effective self-tracking data visualization, and these tools should ideally go beyond the most popular domains such as physical activity or eating and apply to a broad range of domains people may be interested in. The tools would then not only help individuals to understand their own behaviour but also empower them to change in line with their values and preferences. This implies that the person should be free to choose whether they want to use any of the data visualization tools on offer or not, and that policy makers should provide information about the behavioural change strategies implemented in these tools to allow the person to make an informed choice.

Concerning the second development that can aid the effectiveness of self-quantification in behavioural public policy – the advancement of the technology itself – it will be important to devise tools that can track behaviours and people's psychological states more precisely and reliably. Currently, many quantified-self approaches rely on self-reported data because technologies to track the actual behaviours or experienced emotions are either not sufficiently developed or do not yet exist. This is, however, problematic from a usability perspective, because people may want to use self-quantification but simply do not have the time or capacity to manually log their data (Li et al., Reference Li, Dey and Forlizzi2010; Wolf & De Groot, Reference Wolf and De Groot2020). In fact, this need for constant data logging may interfere with their freedom to engage in activities they enjoy or even create potentially unhealthy obsessions with data collection or the technologies involved (Lupton, Reference Lupton2016). In this respect, it is worth noting that technologies to track behaviour and psychological states are rapidly evolving (e.g., Poria et al., Reference Poria, Cambria, Bajpai and Hussain2017), and more advanced tracking devices are constantly becoming available.

Another potential technological advancement involves developing devices that will not only accurately track behaviours and psychological states, but that will make it easier for people to gain insights regarding which underlying factors shape these behaviours or states. For example, a person may be interested in how different activities, people they meet, and various contextual factors (e.g., weather; colours, sounds, or smells present in their environment; etc.) shape their future behaviours and emotions. Current technologies can typically track several such factors (e.g., other people present in the situation), but they could potentially evolve to automatically track various other factors that would be of interest to individuals who practise self-quantification. Such data would make it possible to compute models that clarify whether these factors predict future behaviours and emotional states. It is important to emphasize that in this example we are referring to factors, behaviours, and emotional states of interest to the person practising self-quantification, and we are not advocating that the devices track data the person is not interested in.
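
For illustration, the sketch below fits a simple linear model to a user's own (here invented) logs to estimate how tracked contextual factors relate to next-day mood; a real device would supply such data automatically, and the factors and ratings shown are assumptions for the example.

```python
# Illustrative sketch: fitting a simple model to a user's own tracked data to
# see which contextual factors predict next-day mood. Data are invented; a real
# device would supply these logs automatically.
from sklearn.linear_model import LinearRegression

# Each row: [hours of sleep, minutes outdoors, number of social interactions]
X = [[6, 10, 1], [8, 45, 4], [7, 30, 2], [5, 5, 0], [8, 60, 5]]
y = [4, 8, 6, 3, 9]  # self-reported mood the following day (1-10)

model = LinearRegression().fit(X, y)
for name, coef in zip(["sleep", "time outdoors", "social contact"], model.coef_):
    print(f"{name}: estimated effect on next-day mood = {coef:.2f}")
```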

Behavioural Informatics

Introducing the technological domain

Behavioural informatics (BI) is the application of the internet of things (IoT) – the network of interconnected devices (e.g., mobile phones, smart speakers, etc.) that can collect and record data generated by human behaviour – for the purpose of creating behavioural change (e.g., Swan, Reference Swan2012; Pavel et al., Reference Pavel, Jimison, Korhonen, Gordon and Saranummi2015; Fu & Wu, Reference Fu and Wu2018; Rahmani et al., Reference Rahmani, Gia, Negash, Anzanpour, Azimi, Jiang and Liljeberg2018). This can be achieved in many ways and requires the use of sophisticated machine learning algorithms. For example, the health coaching platform proposed by Pavel et al. (Reference Pavel, Jimison, Korhonen, Gordon and Saranummi2015) that helps the elderly to improve and manage their health behaviours relies on various devices referred to as sensors that collect data from the person's home environment in real time. These sensors include contact switches, passive infrared sensors that capture motion, bed cameras, computer keyboards, smartphones, credit card logs, accelerometers, environmental sensors, 3D cameras, and so on. The data from the sensors, together with the self-reported data generated by users via questionnaires concerning their health goals and motivational states, are continuously processed by inference algorithms that generate estimates of behaviours as well as psychological and physical states. These estimates are then used by the coaching platform to provide interventions in real time. For example, if the algorithms infer that the person feels sad or depressed, they may prompt a family member or carer to call or visit the person to cheer them up.

Therefore, dynamic personalization (Pavel et al., Reference Pavel, Jimison, Korhonen, Gordon and Saranummi2015) is at the core of BI. In other words, based on the data obtained from various devices in real time, machine learning models can constantly compute different variables that are relevant to the behavioural goals of interest (e.g., motivation levels, barriers to meeting the goals, etc.) and then select the best interventions to be implemented (i.e., the interventions that work best based on previous data and/or that have been established as effective by previous theories of behaviour change). Although BI is to some degree linked to self-quantification because it relies on tracking devices that capture data about people's behaviour, it goes beyond self-quantification because its core components are sophisticated algorithms that process data from various interconnected devices in real time and deliver appropriate behavioural interventions.
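
A highly simplified sketch of this sense-infer-intervene loop is shown below; the sensor readings, the inference rule, and the suggested intervention are invented placeholders, whereas real BI platforms rely on machine learning inference over many interconnected devices.

```python
# Illustrative sketch of the sense-infer-intervene loop at the core of a BI
# platform. Sensor readings, the inference rule, and the intervention are all
# invented placeholders; real systems use machine-learning inference over many
# interconnected devices.

def infer_state(sensor_readings):
    """Toy inference: flag possible low mood from two simple signals."""
    if sensor_readings["steps_today"] < 500 and sensor_readings["calls_made"] == 0:
        return "possible_low_mood"
    return "ok"

def select_intervention(state):
    if state == "possible_low_mood":
        return "Suggest a family member checks in with a call or visit."
    return None  # no intervention needed

readings = {"steps_today": 320, "calls_made": 0}
intervention = select_intervention(infer_state(readings))
if intervention:
    print(intervention)
```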

Theoretical argument and available evidence

One of the advantages of BI is that, rather than being supported by a specific theory, BI platforms can adopt various theories of behavioural change to guide the interventions. For example, Active2Gether (Klein et al., Reference Klein, Manzoor and Mollee2017) is a BI system that encourages physical activity and is based on the social-cognitive theory (Bandura, Reference Bandura2001, Reference Bandura2004). According to the theory as implemented in the system, the main determinants of behaviour change are intentions, self-efficacy regarding the behaviour, and outcome expectancies. Other factors that contribute to these main determinants are social norms, long-term goals, potential obstacles, and satisfaction with one's goal progress. Active2Gether tracks how people score on these theoretical components in real time and then selects the appropriate interventions to guide physical activity. For example, if a person currently has low self-efficacy (i.e., low confidence in their ability to undertake the desired behaviour), then the platform selects simpler behavioural goals (e.g., climbing only one floor instead of five) that the person can easily accomplish and gradually increases their difficulty until the desired, more difficult behaviour is accomplished.
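
The following sketch illustrates graded goal-setting of the kind described in this example: the lower the person's current self-efficacy, the easier the suggested goal, with difficulty increasing as confidence grows. The goal list and thresholds are invented and are not taken from Active2Gether.

```python
# Illustrative sketch of graded goal-setting: the lower a person's current
# self-efficacy, the easier the suggested goal, with difficulty increasing as
# goals are met. The goal list and thresholds are invented.

GOALS = ["climb 1 floor", "climb 2 floors", "climb 3 floors", "climb 5 floors"]

def suggest_goal(self_efficacy, last_goal_index):
    """self_efficacy in [0, 1]; the goal steps up only when confidence is high."""
    if self_efficacy < 0.3:
        return max(0, last_goal_index - 1)               # ease off
    if self_efficacy > 0.7:
        return min(len(GOALS) - 1, last_goal_index + 1)  # step up
    return last_goal_index                               # hold steady

index = suggest_goal(self_efficacy=0.8, last_goal_index=1)
print("Suggested goal:", GOALS[index])
```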

Given that building and testing BI platforms is a highly challenging endeavour because it requires sophisticated programming knowledge, behavioural change expertise, and the opportunity to access or link various sensors, to our knowledge no BI platform has been rigorously researched to date in terms of its effectiveness. Some preliminary findings based on self-reports (e.g., Fu & Wu, Reference Fu and Wu2018), however, indicate that BI has a considerable future potential to revolutionize behaviour change.

Future Potential

Currently, the number of devices connected to the internet that could potentially be used to track behaviour is estimated to be around 30–35 billion (Statista, 2018). This means that each household on average owns several such devices, and the number is likely to be larger in developed countries. Therefore, the potential of BI to contribute to behaviour change is large, given that these devices generate data that could be continuously processed by algorithms and inform real-time interventions. The main obstacle in this regard is likely a lack of collaboration between behavioural change experts and computer scientists, given that all BI platforms need to be a joint effort of researchers and practitioners working in these domains. Therefore, we encourage behavioural scientists to explore current advancements in BI and potentially form collaborations with computer scientists to create effective BI-based behavioural change platforms.

Overcoming Libertarian Paternalism

Administering behavioural interventions via the overviewed technological tools could overcome libertarian paternalism in several ways. First, this approach would not interfere with people's choice processes and would, therefore, not limit their negative freedom (Grüne-Yanoff, Reference Grüne-Yanoff2012; Gane, Reference Gane2021) because they would need to actively select the technology and the intervention to use only after the choice process has ended (i.e., after they have decided whether and which behaviour they want to change). However, beyond this basic contribution, technology has a potential to empower people to preserve their negative freedom even in environments where they typically have little control. For example, whenever people are outside of their homes, they are at the mercy of policy makers, marketers, and other agents who can change the contexts in which these people act to interfere with their choices and influence them. City councils may implement point-of-decision prompts to increase stair climbing (Soler et al., Reference Soler, Leeks, Buchanan, Brownson, Heath and Hopkins2010), and supermarkets may implement choice architecture that encourages a particular food choice (Wansink, Reference Wansink2016; Huitink et al., Reference Huitink, Poelman, van den Eynde, Seidell and Dijkstra2020). People may not agree with how various places they visit daily attempt to change their behaviour, but they have little power to change this. However, VR and AR would empower them to alter the external environment in a way that prompts actions consistent with their goals, values, and beliefs, and to therefore override unwanted contextual influences imposed by other agents that interfere with their choice processes. In this context, instead of implementing nudges that prompt specific choices ‘in the wild’ and thus limit negative freedom, policy makers could focus on producing VR or AR behaviour change apps that people could use to alter their external environment to be consistent with their behavioural preferences.

Transparency would ensure that technological interventions go beyond negative freedom and achieve positive freedom – the possibility of making choices that allow people to take control of their lives and act in line with their fundamental purposes (Carter, Reference Carter2009). For the transparency requirement to be met, a technological intervention would need to be accompanied by a summary that outlines how the intervention operates, whether it is supported by scientific evidence, and in which direction it should change behaviour. Although it is not possible to estimate to what degree different people would utilize this information, its presence would allow them to use reflective processes (Stanovich & West, Reference Stanovich and West2000; Strack & Deutsch, Reference Strack and Deutsch2004) and deliberate on whether a technological intervention is consistent with their values and gives them enough control. In other words, they would have the option to extensively practise their positive freedom if they wanted to do so. This option could be further extended by allowing them not only to select desired interventions based on adequate information, but also to determine intervention parameters. For example, a gamification intervention could be designed in such a way that people can determine how points are awarded and when, what behavioural goals need to be achieved to level up, how badges are unlocked, and so on. Given that all technological interventions we have overviewed would require access to people's data, positive freedom would also necessitate that people have the option to decide which data they are willing to provide. To be able to make this choice, they would ideally need to be presented with a rationale behind the relevance of different variables for a given intervention, and it would be mandatory that the technology provider clarify how their data will be handled.
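
As an illustration of such user-determined parameters, the sketch below lets the person define how points are awarded and when a level-up occurs; all parameter names and values are invented examples rather than a tested intervention design.

```python
# Illustrative sketch: a gamification intervention whose parameters are set by
# the user rather than the designer, in line with the positive-freedom argument
# above. All parameter names and values are invented examples.

user_config = {
    "points_per_behaviour": {"took_stairs": 5, "cooked_at_home": 10},
    "points_to_level_up": 50,
    "badge_rules": {"early_bird": "log any behaviour before 8am"},  # not processed in this sketch
}

def award_points(behaviour, total_points, config=user_config):
    """Add the user-defined points for a behaviour and report the resulting level."""
    total_points += config["points_per_behaviour"].get(behaviour, 0)
    level = total_points // config["points_to_level_up"]
    return total_points, level

points, level = award_points("cooked_at_home", total_points=45)
print(f"Points: {points}, level: {level}")
```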

It is important to emphasize that we do not view technology as something that should replace behavioural strategies that were designed to overcome libertarian paternalism, including nudge plus (Banerjee & John, Reference Banerjee and John2020), self-nudging (Reijula & Hertwig, Reference Reijula and Hertwig2020), and boosting (Hertwig & Grüne-Yanoff, Reference Hertwig and Grüne-Yanoff2017). Instead, we see technology as a tool that can complement and extend these approaches, but also go beyond them. First, the technologies we overviewed can be used to administer interventions compatible with any of the three strategies. For example, nudge plus refers to behavioural change techniques that not only alter the context in which people act but also foster reflection and deliberation about the intervention itself and the behaviour to change. As discussed, the technologies we covered would nurture reflection and deliberation because they would require the person to select the desired behaviour to change and the intervention compatible with their values, to possibly adjust intervention parameters, etc., which is consistent with nudge plus. Second, the technologies overviewed can extend the three intervention techniques by making them more engaging and motivating. For example, self-nudging refers to people applying nudges such as framing or prompts on themselves, which may be difficult to do because it requires extensive self-control that can be depleted (Muraven & Baumeister, Reference Muraven and Baumeister2000; Baumeister et al., Reference Baumeister, Vohs and Tice2007). Technology can make self-nudging easier because it can both automate it and make it more interesting and immersive (e.g., gamifying nudges or presenting them in VR or AR). Finally, technology can go beyond the three intervention techniques because, as discussed, it can empower people to preserve their negative freedom even in environments where they typically have little control by overriding or changing contextual influences in these environments (e.g., AR altering the environment's visual appearance).

Knowledge about the Interventions and Their Mechanisms: An Obstacle to Behavioural Change?

Given that making technological interventions compatible with liberalism requires that the person understands the behavioural change techniques implemented and how they operate, the following question arises: would such extensive knowledge and freedom of choice impair intervention effectiveness? Although this has not yet been systematically investigated, there are several arguments indicating that it should not make interventions ineffective.

The first argument is based on self-determination theory, according to which people's intrinsic motivation to change their behaviour is determined by competence, autonomy, and relatedness (Deci & Ryan, Reference Deci and Ryan2000; Ryan & Deci, Reference Ryan and Deci2000). Given that the transparency and freedom of choice associated with technological interventions should provide people with a sense of autonomy, such interventions could potentially be more intrinsically motivating than interventions that lack these characteristics and thus produce a more durable and long-lasting behavioural change (e.g., Van Der Linden, Reference Van Der Linden2015; Liu et al., Reference Liu, Hau and Zheng2019). The second argument comes from research on personalized persuasion. Studies that have been conducted in this regard (Hirsh et al., Reference Hirsh, Kang and Bodenhausen2012; The Behavioural Insights Team, 2013; Matz et al., Reference Matz, Kosinski, Nave and Stillwell2017; Lavoué et al., Reference Lavoué, Monterrat, Desmarais and George2018; Mills, Reference Mills2020) suggest that personalized behavioural interventions are more effective than non-personalized ones. Therefore, because the technologies overviewed in the present article would lend themselves to personalization, given that they would be linked to the user's specific needs, preferences, and behavioural patterns, it is likely that their effectiveness would benefit from this. As the final argument, we posit that, even if people know how certain interventions operate, this knowledge will not necessarily be salient every time they receive the intervention and therefore need not interfere with how they react to it. For example, even if people are aware that defaults change behaviour by making the decision process less cognitively costly (Blumenstock et al., Reference Blumenstock, Callen and Ghani2018), this does not mean they will not be influenced by defaults when they encounter them. Indeed, Loewenstein et al. (Reference Loewenstein, Bryce, Hagmann and Rajpal2015) showed that, even if people were warned they would receive defaults that would attempt to change their behaviour, the effects of these defaults persisted. Overall, our argument that knowing how behavioural interventions operate should not necessarily hamper their effectiveness is consistent with other articles that tackled this issue (e.g., Banerjee & John, Reference Banerjee and John2020; Reijula & Hertwig, Reference Reijula and Hertwig2020).

New Ethical Issues

Although the new technologies examined in the present article have the potential to create behaviour change while empowering people to make their own choices in this regard, they also raise new ethical issues with implications for freedom of choice. For example, personal data collected via self-quantification, social robots, VR and AR, the various sensors involved in behavioural informatics, and gamification platforms might be stored by private companies, which could use them to influence people more effectively, without their knowledge, to buy products or services they would not otherwise be interested in (Zimmer, Reference Zimmer2010; Kramer et al., Reference Kramer, Guillory and Hancock2014; Verma, Reference Verma2014; Boyd, Reference Boyd2016; Herschel & Miori, Reference Herschel and Miori2017; Gostin et al., Reference Gostin, Halabi and Wilson2018; Rauschnabel et al., Reference Rauschnabel, He and Ro2018; Mathur et al., Reference Mathur, Acar, Friedman, Lucherini, Mayer, Chetty and Narayanan2019; Mavroeidi et al., Reference Mavroeidi, Kitsiou, Kalloniatis and Gritzalis2019). Therefore, although the technological tools would on the surface support liberalism, because they would endorse free choice as well as the subjectivity and plurality of values, below the surface they could be used to fulfil goals aligned not with the individual but with the interests of those who control the technology. Indeed, several past scandals reflect this premise, such as Cambridge Analytica, where people's data were used for microtargeting without their awareness (Isaak & Hanna, Reference Isaak and Hanna2018; Hinds et al., Reference Hinds, Williams and Joinson2020). This and associated dangers of using new technologies in behaviour change remain a valid concern: it cannot be excluded that people's data collected via these technologies will be used to manipulate them in ethically dubious ways.

Data protection policies are continuously advancing; however, further action is necessary to ensure democratic and liberal protection of data. The EU General Data Protection Regulation (GDPR) introduced data protection standards regarding informed consent and algorithmic transparency (Wachter, Reference Wachter2018) and gave consumers the right to access, delete, and opt out of the processing of their data at any time (Politou et al., Reference Politou, Alepis and Patsakis2018; Mondschein & Monda, Reference Mondschein, Monda, Kubben, Dumontier and Dekker2019). Multiple countries worldwide have followed, recognizing the need for regulation that matches technological progress and protects citizens' privacy (Lynskey, Reference Lynskey2014; Oettinger, Reference Oettinger2015). However, opt-out clauses may not be sufficient to ensure sustainable protection of individuals' privacy. As Viljoen (Reference Viljoen2020) argues, what drives both the value and the danger of data in the digital economy is their relational aspect – the fact that they place individuals into relationships within a population-wide network. Large companies are not primarily interested in individual-level insights about specific subjects, but rather in population-level knowledge. While the GDPR and similar legislation aim at individual-level privacy protection, population-level protection remains overlooked. To address this gap, governments could move towards more democratic institutions of data governance, following the solution proposed by Viljoen (Reference Viljoen2020).
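
Returning to the individual-level rights the GDPR grants, the following minimal sketch illustrates what honouring access, deletion, and opt-out could look like inside a technology-based behavioural intervention. The ConsentRecord class and its methods are hypothetical and illustrative only; they are not a description of any specific GDPR-compliant system.

from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class ConsentRecord:
    """Minimal record of what a user has agreed to share with an intervention platform."""
    user_id: str
    processing_allowed: bool = True  # the user can withdraw this at any time
    data: Dict[str, List[float]] = field(default_factory=dict)

    def access(self) -> Dict[str, List[float]]:
        """Right of access: return a copy of the data stored about the user."""
        return dict(self.data)

    def delete(self) -> None:
        """Right to erasure: remove all stored data."""
        self.data.clear()

    def opt_out(self) -> None:
        """Right to object: stop any further processing of the user's data."""
        self.processing_allowed = False

    def record(self, stream: str, value: float) -> None:
        """Store a new data point only while processing is allowed."""
        if self.processing_allowed:
            self.data.setdefault(stream, []).append(value)


# Example: a user logs step counts, inspects their data, opts out, and erases the record.
record = ConsentRecord(user_id="user-123")
record.record("steps", 8200.0)
print(record.access())           # {'steps': [8200.0]}
record.opt_out()
record.record("steps", 9100.0)   # ignored after opt-out
record.delete()
print(record.access())           # {}

As Viljoen's argument suggests, such individual-level controls are necessary but not sufficient: they do not by themselves address the population-level, relational uses of data discussed above.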

These suggested advancements in data protection regulation might be supported by increasing public demand for data protection. The privacy paradox – the discrepancy between users' concern about their privacy and the fact that they do little to protect their privacy and personal data – is a result of individuals' risk–benefit calculation and the perception that the risk is low (Barth & de Jong, Reference Barth and De Jong2017). However, recent scandals such as Cambridge Analytica, or popular documentaries such as The Social Dilemma and Terms and Conditions May Apply that uncover which data corporations and governments collect and what they use them for, may help change the risk–benefit ratio and the perceived risk. For example, making data privacy abuse concrete and psychologically close may motivate people to overcome this paradox, which is aligned with construal level theory (Spence et al., Reference Spence, Poortinga and Pidgeon2012). A recently published report is consistent with this premise: in an age when people are increasingly exposed to information about data privacy abuse through the media, 75% of US adults support more government regulation of the personal information that data companies can store and what they can do with it (Auxier et al., Reference Auxier, Rainie, Anderson, Perrin, Kumar and Turner2019). With increasing public demand for data protection, policy makers should offer legislative solutions that not only protect customers' data but also provide a secure framework for behavioural science interventions supported by new technologies.

Additional Policy Considerations

Finally, it is important to address the remaining practical challenges that might hamper the application of the new technologies we have overviewed in the policy context. The first challenge is scalability. The use of all the technologies we have discussed for administering behavioural interventions depends, at least to some degree, on a stable and fast internet connection. However, there is currently a significant urban–rural divide in internet coverage. In Europe, for instance, only 59% of households in rural areas have access to high-speed broadband internet, compared with roughly 86% of total EU households (DiMaggio & Hargittai, Reference DiMaggio and Hargittai2001; European Commission, 2020). Therefore, the extent to which the new technologies will be scalable in the future will depend on how rapidly fast internet technologies (e.g., Fiber-To-The-Premises or 5G) develop and become adopted.

Furthermore, implementation of the new technologies has the potential to create negative spillovers that might outweigh the benefits they create (Truelove et al., Reference Truelove, Carrico, Weber, Raimi and Vandenbergh2014). For example, whereas humanoid social robots can serve as messengers that prompt people to undertake various behaviours, they could also partly replace other humans, both as companions and as intimate partners, which might negatively affect birth rates. This could be problematic for developed countries already struggling with falling birth rates, such as Japan or the United States (Kramer, Reference Kramer2013). Whereas social robots that fulfil people's intimate and/or sexual needs could have a positive impact on health (Döring & Pöschl, Reference Döring and Pöschl2018), they might create further pressures on demographic development if they influence individuals to opt out of reproductive sexual relationships (Scheutz & Arnold, Reference Scheutz and Arnold2016; Danaher et al., Reference Danaher, Earp, Sandberg, Danaher and McArthur2017). This is only one example; each of the technologies we cover could be linked to other negative spillovers. Therefore, before the new technologies can be implemented to administer behavioural interventions on a large scale, policy makers will need to systematically evaluate their potential negative spillovers.

Finally, the introduction of the new technologies as an alternative policy tool might result in a negative shift of policy focus from a strategic and contextual approach to a more piecemeal one. For example, we have discussed that VR or AR can empower people to alter the context in which they act and potentially reduce the manipulative influence of external agents such as marketeers on their behaviour. Whereas this may be a desirable outcome from the users' point of view, it would constitute only a piecemeal solution because it would shift further responsibility onto the individual, as opposed to the organizations that should provide a cleaner, safer, and better organized context for the population. Moreover, using VR or AR for this purpose could discourage policy makers from undertaking the effortful process of developing a more strategic regulatory framework that would limit the manipulative impact of marketeers and large organizations on the context in which people act. Therefore, it is important that policy makers do not use new technologies as a quick fix for policy challenges that need to be tackled in a more strategic way.

Conclusion

In the present article, we proposed that one way of making behavioural science interventions less paternalistic could be to integrate them with cutting-edge developments in technology. We covered five emerging technological domains – virtual and augmented reality, social robotics, gamification, self-quantification, and behavioural informatics – and examined their current state of development, their potential compatibility with techniques of behaviour change, and how using them to alter behaviour could overcome libertarian paternalism. In this regard, we argued that interventions delivered using these technologies would be aligned with liberal principles because they would require that people deliberately choose which behaviours they want to change (if any) and select the desired technological tools and interventions for this purpose. Moreover, the interventions would be described in a user-friendly way to ensure transparency and compatibility with users' values and beliefs. Importantly, we do not expect that the integration of behavioural science and cutting-edge technologies can be achieved immediately. As discussed, there are several impediments, including that some technologies are not yet fully scalable or usable and that they are associated with potential ethical issues. The main purpose of this article is to encourage behavioural scientists to start exploring the technologies we discussed more rigorously and designing testable behavioural change tools for them. This will speed up the integration of the two domains and lead to a new age of liberal behavioural interventions that enable extensive freedom of choice.

Footnotes

1 In this context, it is important to point out that the evidence regarding the uncanny valley is inconsistent – whereas findings show that human-like robots can cause eeriness and revulsion when they contain certain non-human features, it remains unclear which specific features lead to this reaction, at what objectively defined levels of human–robot similarity, and why (Kätsyri et al., Reference Kätsyri, Förger, Mäkäräinen and Takala2015; MacDorman & Chattopadhyay, Reference MacDorman and Chattopadhyay2016).

References

Abbott, R., Orr, N., McGill, P., Whear, R., Bethel, A., Garside, R., Stein, K. and Thompson-Coon, J. (2019), ‘How do “robopets” impact the health and well-being of residents in care homes? A systematic review of qualitative and quantitative evidence’, International Journal of Older People Nursing, 14(3): e12239.
Abdi, J., Al-Hindawi, A., Ng, T. and Vizcaychipi, M. P. (2018), ‘Scoping review on the use of socially assistive robot technology in elderly care’, BMJ Open, 8(2): e018815.
Alberto, R. and Salazar, V. (2012), ‘Libertarian paternalism and the dangers of nudging consumers’, King's Law Journal, 23(1): 51–67.
Attali, Y. and Arieli-Attali, M. (2015), ‘Gamification in assessment: Do points affect test performance?’, Computers & Education, 83: 57–63.
Auxier, B., Rainie, L., Anderson, M., Perrin, A., Kumar, M. and Turner, E. (2019), Americans and privacy: Concerned, confused and feeling lack of control over their personal information. Pew Research Center: Internet, Science & Tech (blog). November 15, 2019.
Bailey, J. O., Bailenson, J. N., Flora, J., Armel, K. C., Voelker, D. and Reeves, B. (2015), ‘The impact of vivid messages on reducing energy consumption related to hot water use’, Environment and Behavior, 47(5): 570–592.
Banakou, D., Hanumanthu, P. D. and Slater, M. (2016), ‘Virtual embodiment of white people in a black virtual body leads to a sustained reduction in their implicit racial bias’, Frontiers in Human Neuroscience, 10: 601.
Banakou, D., Kishore, S. and Slater, M. (2018), ‘Virtually being Einstein results in an improvement in cognitive task performance and a decrease in age bias’, Frontiers in Psychology, 9: 917.
Bandura, A. (1997), Self-efficacy: The exercise of control. New York: Freeman.
Bandura, A. (1998), ‘Health promotion from the perspective of social cognitive theory’, Psychology and Health, 13(4): 623–649.
Bandura, A. (2001), ‘Social cognitive theory: An agentic perspective’, Annual Review of Psychology, 52(1): 1–26.
Bandura, A. (2004), ‘Health promotion by social cognitive means’, Health Education & Behavior, 31(2): 143–164.
Banerjee, S. and John, P. (2020), ‘Nudge plus: Incorporating reflection into behavioural public policy’, Behavioural Public Policy, 1–16.
Baptista, G. and Oliveira, T. (2019), ‘Gamification and serious games: A literature meta-analysis and integrative model’, Computers in Human Behavior, 92: 306–315.
Barth, S. and De Jong, M. D. (2017), ‘The privacy paradox – Investigating discrepancies between expressed privacy concerns and actual online behavior – A systematic literature review’, Telematics and Informatics, 34(7): 1038–1058.
Bartneck, C. and Forlizzi, J. (2004), ‘A design-centred framework for social human-robot interaction’, in RO-MAN 2004. 13th IEEE International Workshop on Robot and Human Interactive Communication (IEEE Catalog No. 04TH8759), IEEE, 591–594.
Bartneck, C., Nomura, T., Kanda, T., Suzuki, T. and Kennsuke, K. (2005), A cross-cultural study on attitudes towards robots. HCI International. 10.13140/RG.2.2.35929.11367.
Barton, A. and Grüne-Yanoff, T. (2015), ‘From libertarian paternalism to nudging—and beyond’, Review of Philosophy and Psychology, 6(3): 341–359.
Baumeister, R. F., Vohs, K. D. and Tice, D. M. (2007), ‘The strength model of self-control’, Current Directions in Psychological Science, 16(6): 351–355.
Belpaeme, T., Kennedy, J., Ramachandran, A., Scassellati, B. and Tanaka, F. (2018), ‘Social robots for education: A review’, Science Robotics, 3(21): eaat5954.
Blumenstock, J., Callen, M. and Ghani, T. (2018), ‘Why do defaults affect behavior? Experimental evidence from Afghanistan’, American Economic Review, 108(10): 2868–2901.
Borenstein, J. and Arkin, R. C. (2017), ‘Nudging for good: robots and the ethical appropriateness of nurturing empathy and charitable behavior’, AI & Society, 32(4): 499–507.
Boyd, D. (2016), ‘Untangling research and practice: What Facebook's “emotional contagion” study teaches us’, Research Ethics, 12(1): 4–13.
Brehm, J. W. (1966), A theory of psychological reactance. New York: Academic Press.
Brehm, S. S. and Brehm, J. W. (2013), Psychological reactance: A theory of freedom and control. New York: Academic Press.
Broadbent, E. (2017), ‘Interactions with robots: The truths we reveal about ourselves’, Annual Review of Psychology, 68: 627–652.
Brügger, A. (2020), ‘Understanding the psychological distance of climate change: The limitations of construal level theory and suggestions for alternative theoretical perspectives’, Global Environmental Change, 60: 102023.
Bruyneel, S. D. and Dewitte, S. (2012), ‘Engaging in self-regulation results in low-level construals’, European Journal of Social Psychology, 42(6): 763–769.
Card, S. K., Mackinlay, J. D. and Shneiderman, B. (1999), Using vision to think. Readings in information visualization. San Francisco, CA, USA: Morgan Kaufmann Publishers.
Carter, I. (2009), ‘Positive and negative liberty’, in E. N. Zalta (ed.), The Stanford encyclopedia of philosophy. Retrieved from: http://plato.stanford.edu/entries/liberty-positive-negative/.
Casaccia, S., Revel, G. M., Scalise, L., Bevilacqua, R., Rossi, L., Paauwe, R. A., Karkowsky, I., Ercoli, I., Serrano, J. A., Suijkerbuijk, S. and Lukkien, D. (2019), ‘Social robot and sensor network in support of activity of daily living for people with dementia’, in Dementia Lab Conference, Cham: Springer, 128–135.
Chu, H. and Yang, J. Z. (2018), ‘Taking climate change here and now – mitigating ideological polarization with psychological distance’, Global Environmental Change, 53: 174–181.
Cialdini, R. B. and Cialdini, R. B. (2007), Influence: The psychology of persuasion (Vol. 55). New York: Collins.
Ciechanowski, L., Przegalinska, A., Magnuski, M. and Gloor, P. (2019), ‘In the shades of the uncanny valley: An experimental study of human–chatbot interaction’, Future Generation Computer Systems, 92: 539–548.
Congiu, L. and Moscati, I. (2020), ‘Message and environment: A framework for nudges and choice architecture’, Behavioural Public Policy, 4(1): 71–87.
Consolvo, S., Klasnja, P., McDonald, D. W., Avrahami, D., Froehlich, J., LeGrand, L., Libby, R., Mosher, K. and Landay, J. A. (2008), ‘Flowers or a robot army? Encouraging awareness & activity with personal, mobile displays’, in Proceedings of the 10th International Conference on Ubiquitous Computing, 54–63.
Danaher, J., Earp, B. D. and Sandberg, A. (2017), ‘Should we campaign against sex robots?’, in Danaher, J. and McArthur, N. (eds), Robot sex: Social and ethical implications, Cambridge: MIT Press. Retrieved from: https://philarchive.org/archive/DANSWC
Deci, E. L. (1971), ‘Effects of externally mediated rewards on intrinsic motivation’, Journal of Personality and Social Psychology, 18(1): 105–115.
Deci, E. L. and Ryan, R. M. (2000), ‘The “what” and “why” of goal pursuits: Human needs and the self-determination of behavior’, Psychological Inquiry, 11(4): 227–268.
Deci, E. L., Koestner, R. and Ryan, R. M. (2001), ‘Extrinsic rewards and intrinsic motivation in education: Reconsidered once again’, Review of Educational Research, 71(1): 1–27.
De Lange, M. A., Debets, L. W., Ruitenburg, K. and Holland, R. W. (2012), ‘Making less of a mess: Scent exposure as a tool for behavioral change’, Social Influence, 7(2): 90–97.
Diefenbach, S. and Müssig, A. (2019), ‘Counterproductive effects of gamification: An analysis on the example of the gamified task manager Habitica’, International Journal of Human-Computer Studies, 127: 190–210.
DiMaggio, P. and Hargittai, E. (2001), From the ‘digital divide’ to ‘digital inequality’: Studying Internet use as penetration increases. Princeton: Center for Arts and Cultural Policy Studies, Woodrow Wilson School, Princeton University, 4(1): 4–2.
Dolan, P., Hallsworth, M., Halpern, D., King, D., Metcalfe, R. and Vlaev, I. (2012), ‘Influencing behaviour: The mindspace way’, Journal of Economic Psychology, 33(1): 264–277.
Döring, N. and Pöschl, S. (2018), ‘Sex toys, sex dolls, sex robots: Our under-researched bed-fellows’, Sexologies, 27(3): e51–e55.
Eachus, P. (2001), ‘Pets, people and robots: The role of companion animals and robopets in the promotion of health and well-being’, International Journal of Health Promotion and Education, 39(1): 7–13.
El Kamali, M., Angelini, L., Caon, M., Carrino, F., Röcke, C., Guye, S., Rizzo, G., Mastropietro, A., Sykora, M., Elayan, S. and Kniestedt, I. (2020), ‘Virtual coaches for older adults’ wellbeing: A systematic review’, IEEE Access, 8: 101884–101902.
Elmqaddem, N. (2019), ‘Augmented reality and virtual reality in education. Myth or reality?’, International Journal of Emerging Technologies in Learning, 14(03): 234–242.
European Commission (2020), Digital Economy and Society Index (DESI) 2020. Retrieved from: https://digital-strategy.ec.europa.eu/en/policies/desi.
Fasola, J. and Mataric, M. J. (2012), ‘Using socially assistive human–robot interaction to motivate physical exercise for older adults’, Proceedings of the IEEE, 100(8): 2512–2526.
Fox, S. and Duggan, M. (2013), Tracking for health. Pew Research Center's Internet & American Life Project. Retrieved from: https://www.pewresearch.org/internet/wp-content/uploads/sites/9/media/Files/Reports/2013/PIP_TrackingforHealth-with-appendix.pdf
Fu, Y. and Wu, W. (2018), ‘Behavioural informatics for improving water hygiene practice based on IoT environment’, Journal of Biomedical Informatics, 78: 156–166.
Gane, N. (2021), ‘Nudge economics as libertarian paternalism’, Theory, Culture & Society, 38(6): 119–142.
Ghazali, A. S., Ham, J., Barakova, E. I. and Markopoulos, P. (2018), ‘Effects of robot facial characteristics and gender in persuasive human-robot interaction’, Frontiers in Robotics and AI, 5: 73.
Gill, N. and Gill, M. (2012), ‘The limits to libertarian paternalism: Two new critiques and seven best-practice imperatives’, Environment and Planning C: Government and Policy, 30(5): 924–940.
Gostin, L. O., Halabi, S. F. and Wilson, K. (2018), ‘Health data and privacy in the digital era’, JAMA, 320(3): 233–234.
Granulo, A., Fuchs, C. and Puntoni, S. (2019), ‘Psychological reactions to human versus robotic job replacement’, Nature Human Behaviour, 3(10): 1062–1069.
Grüne-Yanoff, T. (2012), ‘Old wine in new casks: Libertarian paternalism still violates liberal principles’, Social Choice and Welfare, 38(4): 635–645.
Halpern, D. (2015), ‘The rise of psychology in policy: The UK's de facto Council of Psychological Science Advisers’, Perspectives on Psychological Science, 10(6): 768–771.
Hamari, J. (2017), ‘Do badges increase user activity? A field experiment on the effects of gamification’, Computers in Human Behavior, 71: 469–478.
Hamari, J., Koivisto, J. and Sarsa, H. (2014), ‘Does gamification work? A literature review of empirical studies on gamification’, in 2014 47th Hawaii International Conference on System Sciences, IEEE, 3025–3034.
Hamilton-Giachritsis, C., Banakou, D., Quiroga, M. G., Giachritsis, C. and Slater, M. (2018), ‘Reducing risk and improving maternal perspective-taking and empathy using virtual embodiment’, Scientific Reports, 8(1): 1–10.
Hansen, P. G. (2016), ‘The definition of nudge and libertarian paternalism: Does the hand fit the glove?’, European Journal of Risk Regulation, 7(1): 155–174.
Harkin, B., Webb, T. L., Chang, B. P., Prestwich, A., Conner, M., Kellar, I. and Sheeran, P. (2016), ‘Does monitoring goal progress promote goal attainment? A meta-analysis of the experimental evidence’, Psychological Bulletin, 142(2): 198–229.
Heilmann, C. (2014), ‘Success conditions for nudges: A methodological critique of libertarian paternalism’, European Journal for Philosophy of Science, 4(1): 75–94.
Herschel, R. and Miori, V. M. (2017), ‘Ethics & big data’, Technology in Society, 49: 31–36.
Hershfield, H. E., Goldstein, D. G., Sharpe, W. F., Fox, J., Yeykelis, L., Carstensen, L. L. and Bailenson, J. N. (2011), ‘Increasing saving behavior through age-progressed renderings of the future self’, Journal of Marketing Research, 48(SPL): 23–37.
Hertwig, R. and Grüne-Yanoff, T. (2017), ‘Nudging and boosting: Steering or empowering good decisions’, Perspectives on Psychological Science, 12(6): 973–986.
Heyen, N. B. (2016), ‘Self-tracking as knowledge production: Quantified self between modelling and citizen science’, in Selke, S. (ed), Lifelogging, Wiesbaden: Springer VS, 283–301.
Hinds, J., Williams, E. J. and Joinson, A. N. (2020), ‘“It wouldn't happen to me”: Privacy concerns and perspectives following the Cambridge Analytica scandal’, International Journal of Human-Computer Studies, 143: 102498.
Hirsh, J. B., Kang, S. K. and Bodenhausen, G. V. (2012), ‘Personalized persuasion: Tailoring persuasive appeals to recipients’ personality traits’, Psychological Science, 23(6): 578–581.
Hori, Y., Tokuda, Y., Miura, T., Hiyama, A. and Hirose, M. (2013), ‘Communication pedometer: a discussion of gamified communication focused on frequency of smiles’, in Proceedings of the 4th Augmented Human International Conference, 206–212.
Huitink, M., Poelman, M. P., van den Eynde, E., Seidell, J. C. and Dijkstra, S. C. (2020), ‘Social norm nudges in shopping trolleys to promote vegetable purchases: A quasi-experimental study in a supermarket in a deprived urban area in the Netherlands’, Appetite, 153: 104655.
Isaak, J. and Hanna, M. J. (2018), ‘User data privacy: Facebook, Cambridge Analytica, and privacy protection’, Computer, 51(8): 56–59.
Johnson, E. J., Shu, S. B., Dellaert, B. G., Fox, C., Goldstein, D. G., Häubl, G., Larrick, R. P., Payne, J. W., Peters, E., Schkade, D. and Wansink, B. (2012), ‘Beyond nudges: Tools of a choice architecture’, Marketing Letters, 23(2): 487–504.
Johnson, D., Deterding, S., Kuhn, K. A., Staneva, A., Stoyanov, S. and Hides, L. (2016), ‘Gamification for health and wellbeing: A systematic review of the literature’, Internet Interventions, 6: 89–106.
Johnson, D., Horton, E., Mulcahy, R. and Foth, M. (2017), ‘Gamification and serious games within the domain of domestic energy consumption: A systematic review’, Renewable and Sustainable Energy Reviews, 73: 249–264.
Jones, B. A., Madden, G. J., Wengreen, H. J., Aguilar, S. S. and Desjardins, E. A. (2014), ‘Gamification of dietary decision-making in an elementary-school cafeteria’, PLoS One, 9(4): e93872.
Jones, C., Hine, D. W. and Marks, A. D. (2017), ‘The future is now: reducing psychological distance to increase public engagement with climate change’, Risk Analysis, 37(2): 331–341.
Kätsyri, J., Förger, K., Mäkäräinen, M. and Takala, T. (2015), ‘A review of empirical evidence on different uncanny valley hypotheses: Support for perceptual mismatch as one road to the valley of eeriness’, Frontiers in Psychology, 6: 390.
Kätsyri, J., de Gelder, B. and Takala, T. (2019), ‘Virtual faces evoke only a weak uncanny valley effect: An empirical investigation with controlled virtual face images’, Perception, 48(10): 968–991.
Kersten-van Dijk, E. T., Westerink, J. H., Beute, F. and Ijsselsteijn, W. A. (2017), ‘Personal informatics, self-insight, and behavior change: A critical review of current literature’, Human–Computer Interaction, 32(5–6): 268–296.
Kim, H., Schnall, S. and White, M. P. (2013), ‘Similar psychological distance reduces temporal discounting’, Personality and Social Psychology Bulletin, 39(8): 1005–1016.
Klein, M. C., Manzoor, A. and Mollee, J. S. (2017), ‘Active2Gether: A personalized m-health intervention to encourage physical activity’, Sensors, 17(6): 1436.
Kogut, T., Ritov, I., Rubaltelli, E. and Liberman, N. (2018), ‘How far is the suffering? The role of psychological distance and victims’ identifiability in donation decisions’, Judgment and Decision Making, 13(5): 458.
Koivisto, J. and Hamari, J. (2019), ‘The rise of motivational information systems: A review of gamification research’, International Journal of Information Management, 45: 191–210.
Kramer, S. P. (2013), The other population crisis: What governments can do about falling birth rates. Washington, DC, United States: Woodrow Wilson Center Press/Johns Hopkins University Press.
Kramer, A. D., Guillory, J. E. and Hancock, J. T. (2014), ‘Experimental evidence of massive-scale emotional contagion through social networks’, Proceedings of the National Academy of Sciences, 111(24): 8788–8790.
Lades, L. K. and Delaney, L. (2020), ‘Nudge FORGOOD’, Behavioural Public Policy, 1–20.
Lanier, M., Waddell, T. F., Elson, M., Tamul, D. J., Ivory, J. D. and Przybylski, A. (2019), ‘Virtual reality check: Statistical power, reported results, and the validity of research on the psychology of virtual reality and immersive environments’, Computers in Human Behavior, 100: 70–78.
Lavoué, E., Monterrat, B., Desmarais, M. and George, S. (2018), ‘Adaptive gamification for learning environments’, IEEE Transactions on Learning Technologies, 12(1): 16–28.
Le Grand, J. (2020), ‘Some challenges to the new paternalism’, Behavioural Public Policy, 1–12.
Li, I., Dey, A. and Forlizzi, J. (2010), ‘A stage-based model of personal informatics systems’, in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 557–566.
Liu, Y., Hau, K. T. and Zheng, X. (2019), ‘Do both intrinsic and identified motivations have long-term effects?’, The Journal of Psychology, 153(3): 288–306.
Loewenstein, G. and Chater, N. (2017), ‘Putting nudges in perspective’, Behavioural Public Policy, 1(1): 26.
Loewenstein, G., Bryce, C., Hagmann, D. and Rajpal, S. (2015), ‘Warning: You are about to be nudged’, Behavioral Science & Policy, 1(1): 35–42.
Looije, R., Neerincx, M. A. and Cnossen, F. (2010), ‘Persuasive robotic assistant for health self-management of older adults: Design and evaluation of social behaviors’, International Journal of Human-Computer Studies, 68(6): 386–397.
Looije, R., van der Zalm, A., Neerincx, M. A. and Beun, R. J. (2012), ‘Help, I need some body: The effect of embodiment on playful learning’, in 2012 IEEE RO-MAN: The 21st IEEE International Symposium on Robot and Human Interactive Communication, IEEE, 718–724.
Looyestyn, J., Kernot, J., Boshoff, K., Ryan, J., Edney, S. and Maher, C. (2017), ‘Does gamification increase engagement with online programs? A systematic review’, PLoS One, 12(3): e0173403.
Lupton, D. (2015), ‘Quantified sex: A critical analysis of sexual and reproductive self-tracking using apps’, Culture, Health & Sexuality, 17(4): 440–453.
Lupton, D. (2016), The quantified self. Cambridge: John Wiley & Sons.
Lynskey, O. (2014), ‘Deconstructing data protection: The ‘added-value’ of a right to data protection in the EU legal order’, International & Comparative Law Quarterly, 63(3): 569–597.
MacDorman, K. F. and Chattopadhyay, D. (2016), ‘Reducing consistency in human realism increases the uncanny valley effect; increasing category uncertainty does not’, Cognition, 146: 190–205.
Maltseva, K. and Lutz, C. (2018), ‘A quantum of self: A study of self-quantification and self-disclosure’, Computers in Human Behavior, 81: 102–114.
Marteau, T. M., Hollands, G. J. and Fletcher, P. C. (2012), ‘Changing human behavior to prevent disease: The importance of targeting automatic processes’, Science, 337(6101): 1492–1495.
Mathur, M. B. and Reichling, D. B. (2016), ‘Navigating a social world with robot partners: A quantitative cartography of the Uncanny Valley’, Cognition, 146: 22–32.
Mathur, A., Acar, G., Friedman, M. J., Lucherini, E., Mayer, J., Chetty, M. and Narayanan, A. (2019), ‘Dark patterns at scale: Findings from a crawl of 11 K shopping websites’, in Proceedings of the ACM on Human-Computer Interaction, 3(CSCW), 1–32.
Matsuo, Y., Miki, S., Takeda, T. and Kubota, N. (2015), ‘Self-efficacy estimation for health promotion support with robot partner’, in 2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), IEEE, 758–762.
Matz, S. C., Kosinski, M., Nave, G. and Stillwell, D. J. (2017), ‘Psychological targeting as an effective approach to digital mass persuasion’, Proceedings of the National Academy of Sciences, 114(48): 12714–12719.
Mavroeidi, A. G., Kitsiou, A., Kalloniatis, C. and Gritzalis, S. (2019), ‘Gamification vs. privacy: Identifying and analysing the major concerns’, Future Internet, 11(3): 67.
Mehenni, H. A., Kobylyanskaya, S., Vasilescu, I. and Devillers, L. (2020), ‘Nudges with conversational agents and social robots: A first experiment with children at a primary school’, in D'Haro, L. F., Callejas, Z., and Nakamura, S. (eds), Conversational dialogue systems for the next decade, Singapore: Springer, 257–270.
Mekler, E. D., Brühlmann, F., Tuch, A. N. and Opwis, K. (2017), ‘Towards understanding the effects of individual gamification elements on intrinsic motivation and performance’, Computers in Human Behavior, 71: 525–534.
Meske, C., Brockmann, T., Wilms, K. and Stieglitz, S. (2017), ‘Social collaboration and gamification’, in Stieglitz, S., Lattemann, C., Robra-Bissantz, S., Zarnekow, R., and Brockmann, T. (eds), Gamification, Cham: Springer, 93–109.
Michie, S., Van Stralen, M. M. and West, R. (2011), ‘The behaviour change wheel: A new method for characterising and designing behaviour change interventions’, Implementation Science, 6(1): 42.
Mills, S. (2020), ‘Personalized nudging’, Behavioural Public Policy, 1–10.
Mondschein, C. F. and Monda, C. (2019), ‘The EU's General Data Protection Regulation (GDPR) in a research context’, in Kubben, P., Dumontier, M., and Dekker, A. (eds), Fundamentals of clinical data science, Cham: Springer, 55–71.
Mongin, P. and Cozic, M. (2018), ‘Rethinking nudge: Not one but three concepts’, Behavioural Public Policy, 2(1): 107–124.
Morschheuser, B., Hamari, J. and Maedche, A. (2019), ‘Cooperation or competition – When do people contribute more? A field experiment on gamification of crowdsourcing’, International Journal of Human-Computer Studies, 127: 7–24.
Muraven, M. and Baumeister, R. F. (2000), ‘Self-regulation and depletion of limited resources: Does self-control resemble a muscle?’, Psychological Bulletin, 126(2): 247–259.
Nelson, K. M., Anggraini, E. and Schlüter, A. (2020), ‘Virtual reality as a tool for environmental conservation and fundraising’, PLoS One, 15(4): e0223631.
Ng, Y. L., Ma, F., Ho, F. K., Ip, P. and Fu, K. W. (2019), ‘Effectiveness of virtual and augmented reality-enhanced exercise on physical activity, psychological outcomes, and physical performance: A systematic review and meta-analysis of randomized controlled trials’, Computers in Human Behavior, 99: 278–291.
Nomura, T., Kanda, T. and Suzuki, T. (2006), ‘Experimental investigation into influence of negative attitudes toward robots on human–robot interaction’, AI & Society, 20(2): 138–150.
North, C. (2006), ‘Toward measuring visualization insight’, IEEE Computer Graphics and Applications, 26(3): 6–9.
Oettinger, G. (2015), Europe's future is digital. Speech at Hannover Messe, Speech 15, 4772. Retrieved from: http://europa.eu/rapid/press-release_SPEECH-15-4772_en.htm
Oliver, A. (2013), Behavioural public policy. Cambridge, UK: Cambridge University Press.
Oliver, A. (2019), ‘Towards a new political economy of behavioral public policy’, Public Administration Review, 79(6): 917–924.
Otten, J. J., Cheng, K. and Drewnowski, A. (2015), ‘Infographics and public policy: Using data visualization to convey complex information’, Health Affairs, 34(11): 1901–1907.
Pavel, M., Jimison, H. B., Korhonen, I., Gordon, C. M. and Saranummi, N. (2015), ‘Behavioral informatics and computational modelling in support of proactive health management and care’, IEEE Transactions on Biomedical Engineering, 62(12): 2763–2775.
Pe-Than, E. P. P., Goh, D. H. L. and Lee, C. S. (2014), ‘Making work fun: Investigating antecedents of perceived enjoyment in human computation games for information sharing’, Computers in Human Behavior, 39: 88–99.
Pennisi, P., Tonacci, A., Tartarisco, G., Billeci, L., Ruta, L., Gangemi, S. and Pioggia, G. (2016), ‘Autism and social robotics: A systematic review’, Autism Research, 9(2): 165–183.
Ploderer, B., Reitberger, W., Oinas-Kukkonen, H. and van Gemert-Pijnen, J. (2014), ‘Social interaction and reflection for behaviour change’, Personal and Ubiquitous Computing, 18: 1667–1676. doi:10.1007/s00779-014-0779-y.
Politou, E., Alepis, E. and Patsakis, C. (2018), ‘Forgetting personal data and revoking consent under the GDPR: Challenges and proposed solutions’, Journal of Cybersecurity, 4(1): tyy001.
Poria, S., Cambria, E., Bajpai, R. and Hussain, A. (2017), ‘A review of affective computing: From unimodal analysis to multimodal fusion’, Information Fusion, 37: 98–125.
Przybylski, A. K., Rigby, C. S. and Ryan, R. M. (2010), ‘A motivational model of video game engagement’, Review of General Psychology, 14(2): 154–166.
Rahmani, A. M., Gia, T. N., Negash, B., Anzanpour, A., Azimi, I., Jiang, M. and Liljeberg, P. (2018), ‘Exploiting smart e-Health gateways at the edge of healthcare Internet-of-Things: A fog computing approach’, Future Generation Computer Systems, 78: 641–658.
Rauschnabel, P. A., He, J. and Ro, Y. K. (2018), ‘Antecedents to the adoption of augmented reality smart glasses: A closer look at privacy risks’, Journal of Business Research, 92: 374–384.
Rebonato, R. (2014), ‘A critical assessment of libertarian paternalism’, Journal of Consumer Policy, 37: 357–396.
Reeves, B., Wise, K., Maldonado, H., Kogure, K., Shinozawa, K. and Naya, F. (2003), Robots versus on-screen agents: Effects on social and emotional responses. In CHI 2003.
Reijula, S. and Hertwig, R. (2020), ‘Self-nudging and the citizen choice architect’, Behavioural Public Policy, 1–31.
Riva, G., Baños, R. M., Botella, C., Mantovani, F. and Gaggioli, A. (2016), ‘Transforming experience: The potential of augmented reality and virtual reality for enhancing personal and clinical change’, Frontiers in Psychiatry, 7: 164.
Robinson, N. L., Cottier, T. V. and Kavanagh, D. J. (2019), ‘Psychosocial health interventions by social robots: Systematic review of randomized controlled trials’, Journal of Medical Internet Research, 21(5): e13203.
Robinson, N. L., Connolly, J., Hides, L. and Kavanagh, D. J. (2020), ‘Social robots as treatment agents: Pilot randomized controlled trial to deliver a behavior change intervention’, Internet Interventions, 21: 100320.
Rodogno, R. (2020), ‘Who's afraid of nudging by social robots?’, in Seibt, J., Hakli, R., and Nørskov, M. (eds), Robophilosophy: Philosophy of, for, and by social robotics. MIT Press.
Rosenberg, R. S., Baughman, S. L. and Bailenson, J. N. (2013), ‘Virtual superheroes: Using superpowers in virtual reality to encourage prosocial behavior’, PLoS One, 8(1): e55003.
Roubroeks, M., Ham, J. and Midden, C. (2011), ‘When artificial social agents try to persuade people: The role of social agency on the occurrence of psychological reactance’, International Journal of Social Robotics, 3(2): 155–165.
Ryan, R. M. and Deci, E. L. (2000), ‘Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being’, American Psychologist, 55(1): 68–78.
Sanders, M., Snijders, V. and Hallsworth, M. (2018), ‘Behavioural science and policy: Where are we now and where are we going?’, Behavioural Public Policy, 2(2): 144–167.
Scheutz, M. and Arnold, T. (2016), ‘Are we ready for sex robots?’, in 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI), IEEE, 351–358.
Seaborn, K. and Fels, D. I. (2015), ‘Gamification in theory and action: A survey’, International Journal of Human-Computer Studies, 74: 14–31.
Seinfeld, S., Arroyo-Palacios, J., Iruretagoyena, G., Hortensius, R., Zapata, L. E., Borland, D., de Gelder, B., Slater, M. and Sanchez-Vives, M. V. (2018), ‘Offenders become the victim in virtual reality: Impact of changing perspective in domestic violence’, Scientific Reports, 8(1): 1–11.
Seo, S. H., Geiskkovitch, D., Nakane, M., King, C. and Young, J. E. (2015), ‘Poor thing! Would you feel sorry for a simulated robot? A comparison of empathy toward a physical and a simulated robot’, in 2015 10th ACM/IEEE International Conference on Human-Robot Interaction (HRI), IEEE, 125–132.
Sequeira, J. S. (2018), ‘Can social robots make societies more human?’, Information, 9(12): 295.
Simonovits, G., Kezdi, G. and Kardos, P. (2018), ‘Seeing the world through the other's eye: An online intervention reducing ethnic prejudice’, American Political Science Review, 112(1): 186–193.
Soler, R. E., Leeks, K. D., Buchanan, L. R., Brownson, R. C., Heath, G. W., Hopkins, D. H. and Task Force on Community Preventive Services (2010), ‘Point-of-decision prompts to increase stair use: A systematic review update’, American Journal of Preventive Medicine, 38(2): S292–S300.
Song, Y. and Luximon, Y. (2020), ‘Trust in AI agent: A systematic review of facial anthropomorphic trustworthiness for social robot design’, Sensors, 20(18): 5087.
Spence, A., Poortinga, W. and Pidgeon, N. (2012), ‘The psychological distance of climate change’, Risk Analysis: An International Journal, 32(6): 957–972.
Stanovich, K. E. and West, R. F. (2000), ‘Individual differences in reasoning: Implications for the rationality debate?’, Behavioral and Brain Sciences, 23(5): 645–665.
Statista, I. H. S. (2018), Internet of Things (IoT) connected devices installed base worldwide from 2015 to 2025 (in billions). Retrieved from: https://www.statista.com/statistics/471264/iot-number-of-connected-devices-worldwide/.
Strack, F. and Deutsch, R. (2004), ‘Reflective and impulsive determinants of social behavior’, Personality and Social Psychology Review, 8(3): 220–247.
Sunstein, C. R. (2014), Why nudge?: The politics of libertarian paternalism. New Haven, CT, United States: Yale University Press.
Sunstein, C. R. (2015), ‘Behavioural economics, consumption and environmental protection’, in Reisch, L. A., and Thøgersen, J. (eds), Handbook of research on sustainable consumption. Edward Elgar Publishing.
Sunstein, C. R. (2020), Behavioral science and public policy. Cambridge: Cambridge University Press (Elements in Public Economics).
Sunstein, C. R. and Thaler, R. H. (2003), ‘Libertarian paternalism is not an oxymoron’, The University of Chicago Law Review, 1159–1202.
Swan, M. (2012), ‘Sensor mania! The internet of things, wearable computing, objective metrics, and the quantified self 2.0’, Journal of Sensor and Actuator Networks, 1(3): 217–253.
Swan, M. (2013), ‘The quantified self: Fundamental disruption in big data science and biological discovery’, Big Data, 1(2): 85–99.
Thaler, R. H. and Sunstein, C. R. (2003), ‘Libertarian paternalism’, American Economic Review, 93(2): 175–179.
Thaler, R. H. and Sunstein, C. R. (2008), Nudge: Improving decisions about health, wealth, and happiness. New Haven, CT, United States: Yale University Press.
The Behavioural Insights Team (2013), Applying behavioural insights to charitable giving. Cabinet Office. Retrieved from: www.gov.uk/government/publications/applying-behavioural-insights-to-charitable-giving
Touré-Tillery, M. and Fishbach, A. (2017), ‘Too far to help: The effect of perceived distance on the expected impact and likelihood of charitable action’, Journal of Personality and Social Psychology, 112(6): 860.
Truelove, H. B., Carrico, A. R., Weber, E. U., Raimi, K. T. and Vandenbergh, M. P. (2014), ‘Positive and negative spillover of pro-environmental behavior: An integrative review and theoretical framework’, Global Environmental Change, 29: 127–138.
Tussyadiah, I. and Miller, G. (2019), ‘Nudged by a robot: Responses to agency and feedback’, Annals of Tourism Research, 78: 102752.
Van Boven, L., Kane, J., McGraw, A. P. and Dale, J. (2010), ‘Feeling close: Emotional intensity reduces perceived psychological distance’, Journal of Personality and Social Psychology, 98(6): 872.
Van Der Linden, S. (2015), ‘Intrinsic motivation and pro-environmental behaviour’, Nature Climate Change, 5(7): 612–613.
Verma, I. M. (2014), ‘Editorial expression of concern: Experimental evidence of massive-scale emotional contagion through social networks’, Proceedings of the National Academy of Sciences, 201412469.
Viljoen, S. (2020), Democratic data: A relational theory for data governance. Retrieved from: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3727562
Vlaev, I., King, D., Dolan, P. and Darzi, A. (2016), ‘The theory and practice of “nudging”: Changing health behaviors’, Public Administration Review, 76(4): 550–561.
Wachter, S. (2018), ‘Normative challenges of identification in the Internet of Things: Privacy, profiling, discrimination, and the GDPR’, Computer Law & Security Review, 34(3): 436–449.
Wansink, B. (2016), Slim by design: Mindless eating solutions for everyday life. London, UK: Hay House, Inc.
Wolf, G. I. and De Groot, M. (2020), ‘A conceptual framework for personal science’, Frontiers in Computer Science, 2(21): 15.
Xue, H., Sharma, P. and Wild, F. (2019), ‘User satisfaction in augmented reality-based training using Microsoft HoloLens’, Computers, 8(1): 9.
Zainuddin, Z., Chu, S. K. W., Shujahat, M. and Perera, C. J. (2020), ‘The impact of gamification on learning and instruction: A systematic review of empirical evidence’, Educational Research Review, 30: 100326.
Zhang, Y., Zhang, H., Zhang, C. and Li, D. (2020), ‘The impact of self-quantification on consumers’ participation in green consumption activities and behavioral decision-making’, Sustainability, 12(10): 4098.
Zhao, S. (2006), ‘Humanoid social robots as a medium of communication’, New Media & Society, 8(3): 401–419.
Zimmer, M. (2010), ‘“But the data is already public”: On the ethics of research in Facebook’, Ethics and Information Technology, 12(4): 313–325.
Table 1. The Overview of the New Technologies Covered in the Present Article and their Potential for Behaviour Change