
Behavioural Analysis and Regulatory Impact Assessment

Published online by Cambridge University Press: 22 March 2024

James R. Drummond*
Affiliation:
Florence School of Transnational Governance, European University Institute, Florence, Italy
Claudio M. Radaelli
Affiliation:
Florence School of Transnational Governance, European University Institute, Florence, Italy; School of Public Policy, UCL, London, UK
Corresponding author: James R. Drummond; Email: james.drummond@alumnifellows.eui.eu

Abstract

Regulatory impact assessment (RIA) is an appraisal tool to bring evidence to bear on regulatory decisions. A key property of RIA is that it corrects errors in reasoning by pushing regulators towards deliberative thinking to override intuitive judgments. However, the steps for regulatory analysis suggested by international organisations and governmental handbooks do not handle two sources of bias and barriers that are well documented in the literature on behavioural insights. First, bias enters the process via knowledge production during the analytical process of assessment. Second, bias affects knowledge utilisation when regulators “read” or utilise the results of RIA. We explore these two pathways by focusing on drivers of behaviour rather than lists of biases. The conclusions reflect on the limitations of current practice and its possible improvement, making suggestions for an RIA architecture that is fully informed by behavioural analysis.

Type
Articles
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2024. Published by Cambridge University Press

I. Introduction

For decades, the so-called “better regulation” agendaFootnote 1 has spread across the globe. Key in this agenda is the adoption by the civil service (ie mainly executive and independent regulatorsFootnote 2 ) of a toolkit to generate decisions that are informed by evidence.Footnote 3 This includes methodologies to reduce administrative burdens, methods to appraise existing regulations, regulatory targets and the systematic analysis of the likely effects of proposed primary or secondary legislation. This last tool has been developed by the international community and codified in the guidance of international organisations as regulatory impact assessment (RIA). In the social sciences, a large body of literature has looked at the original reasons and mechanisms behind the adoption of RIA, the quality of regulatory analyses and how they are used.Footnote 4

An important rationale for adopting RIA is provided by “behavioural insights” (or BI)Footnote 5 : regulators are impacted by social context and behavioural barriers when appraising regulatory options. Since regulators are human beings, they are prone to errors in reasoning that emanate from a variety of sources, such as bounded rationality,Footnote 6 intuitive judgmentsFootnote 7 and the so-called illusion of control.Footnote 8 RIA can then be justified as a means to encourage regulators to engage in deliberative, evidence-informed thinking.Footnote 9

Our starting point is to acknowledge that this rationale may well be justified, but, we argue, RIA has not yet fully incorporated the lessons from BI. RIA methodologies, guidance and training often assume they are being used by the rational Homo economicus (“econs”), not taking into consideration that, in reality, RIA will be produced and used by Homo sapiens (“humans”)Footnote 10 who will inevitably be subject to social and behavioural limitations. International evidence suggests that this has contributed to an implementation gap as the humans in government find it difficult to use RIA.Footnote 11 We argue for a move from an econs-orientated RIA to a humans-orientated practice of this tool, thus evolving our methods, guidance and training.

We build our argument on an approach that looks at the behavioural issues that humans face when engaging with a RIA. Essentially, we focus on an analysis of behaviours that may result in errors in judgment, rationality and analysis by attending to two categories: the analysts conducting the RIA (a category concerned with knowledge production) and the regulators using its results (we call this “knowledge utilisation”). We first define RIA and discuss how its adoption and use are justified by BI. We then develop a BI-informed analysis of RIA as knowledge production and RIA as knowledge utilisation, focusing on the drivers of behaviour rather than a list of biases. We conclude with some suggestions about the research agenda in this field and the policy implications of our analysis.

II. Defining regulatory impact assessment

Since RIA was first introduced in the USA in the early 1980s,Footnote 12 governments have adopted it for a variety of reasons connected to the “better regulation” agenda. These range from improving regulatory efficiency, reducing burdens and increasing transparency and accountability in the policymaking process to placing limits on the civil service and promoting coherence with long-term government plans.Footnote 13 The academic literature has noted that RIA can also be adopted to signal compliance with international agendas and commitment to rational decision-making without any intention of implementing RIA or using it to support regulatory decisions.Footnote 14 As of 2021, all Organisation for Economic Co-operation and Development (OECD) countries have adopted this tool.Footnote 15

The RIA methodology encourages decision-makers to engage with a step-by-step process to reflect on the logic of policy intervention and how that connects to policy solutions that maximise benefits for society. These steps typically includeFootnote 16 :

  • Problem definition: separate the policy problem into its many drivers.

  • Objective-setting: define the problem baseline and how it will change with the regulatory intervention, then connect it to a cause-and-effect “intervention logic”.

  • Policy options: connect the intervention logic with regulatory and non-regulatory policy options, including a “no intervention” option as a counterfactual and anchor against the rush to regulate.

  • Data collection: understand the problem from different sources, including databases, desk research, empirical methods, stakeholder engagement or models when data are unavailable.

  • Assessing options: analyse the options according to common quantitative and/or qualitative techniques, including cost–benefit analysis (CBA), cost–effectiveness analysis (CEA) and multicriteria analysis (MCA); a worked CBA formulation follows this list.

  • Policy choice: compare the analyses of options and identify the most preferred, which is then provided (often in a detailed write-up) as advice to the decision-maker.

  • Evaluation and monitoring: identify ways in which the selected policy option will be evaluated and monitored over time.
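To make the option-assessment step concrete, the workhorse of CBA is the comparison of options by net present value (NPV). The following is a minimal textbook formulation, not a prescription from any particular RIA guidance: the discount rate $r$ and time horizon $T$ are assumptions fixed by national guidance, and $B_{i,t}$ and $C_{i,t}$ denote the monetised benefits and costs of option $i$ in year $t$:

$$\mathrm{NPV}_i = \sum_{t=0}^{T} \frac{B_{i,t} - C_{i,t}}{(1+r)^{t}}$$

Other things being equal, the option with the highest NPV is preferred. CEA instead ranks options by cost per unit of a fixed outcome, and MCA scores options against weighted criteria where impacts resist monetisation.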

III. The behavioural insights rationale

As mentioned, there are a variety of public management reform and political rationales for adopting RIA. Applying a broadly conceived behavioural perspective, a key justification for RIA is to correct potential errors in judgment. This has its roots as far back as Niskanen’s argumentFootnote 17 that biases exist in all bureaucratic processes – and by extension, also in the regulatory process. This perspective has been pursued in the literature that is critical of classic public choice theory.Footnote 18 To illustrate, the concept of market failures as a logic for government intervention can be extended to include behavioural market failures, widening the set of situations in which government intervention can be justified.Footnote 19 Some authors have spoken of this as behavioural public choice theory,Footnote 20 which argues that the public choice solution of deploying economics-based policy tools to justify policy decisions ignores the fact that the policymaking system is created and used by humans and, thus, by behavioural agents who are subject to behavioural biases and barriers.Footnote 21

A small subset of scholarship has focused specifically on behavioural issues in regulatory policymaking. Dudley and XieFootnote 22 argue that five behavioural issues affect the decisions of regulators: availability, myopia, overconfidence, loss aversion and confirmation bias. Dunlop and RadaelliFootnote 23 examine the “illusion of control” when officers carry out a RIA. This effect leads regulators to underestimate the “do nothing” option. There is also a line of scholarship dedicated to critiquing CBA from a behavioural perspective.Footnote 24

Overall, the core argument from these perspectives is that we need to correct these behavioural issues. Consequently, RIA becomes one way to implement such corrections, with the implicit premise that decisions made via intuitive judgments about policy problems and their solutions are biased and that adopting a deliberative mindset can remove these biases. As, among others, Dudley and Xie and Drummond et alFootnote 25 note, RIA then becomes an analytic and decision-making architecture that encourages deliberative thinking. This raises at least two important questions.

First, what exactly should RIA target, mitigate and correct from a behavioural perspective? What are the errors of judgment to be avoided by RIA as a reasoning and decision-making choice architecture? Here the literature is less clear. Both Dudley and Xie and Dunlop and Radaelli select their biases seemingly at random. They do not derive their selection from a methodology for analysing the problem or from an explanatory model that would provide reasons as to why these biases, against all others, apply to the issues observed in regulatory appraisal decisions. This critique is not unique to these authors. It is linked to the larger issue in applied behavioural science related to the lack of theories as coherent and practical frameworks to generate testable hypotheses and good explanations for findings.Footnote 26 The literature still relies on generic metaphors serving as high-level theories or a “jumbled collection of heuristics and biases, with little in the middle to draw both levels together”.Footnote 27 This “missing middle” includes, at least, the need to follow a systematic approach that logically links perceived behavioural problems with a rigorous analysis that generates solutions to these problems,Footnote 28 much like the initial stages of a RIA, as well as engaging with practical theories that are “geared towards realistic adaptation by practitioners”Footnote 29 by being based on real-life data, testable hypotheses and the identification of limits.Footnote 30

The problem with a focus on biases is that, on their own, they can only usefully serve as a description of a behaviour, but they do not explain how these biases produce an output behaviour.Footnote 31 This is because any given bias will be subject to limits, moderators, conditions and side effects that mediate its effects, even potentially to the point of having little to no effect in some situations.Footnote 32 This chimes with critiques levelled at behavioural science, notably the proliferation of biases, often discovered via isolated and non-replicated experiments on college students from WEIRD backgrounds,Footnote 33 as well as the issue of the “bias bias” that fails to recognise the variance in biases.Footnote 34

Second, are deliberative decisions truly less biased? RIA as a corrective device, as described in the official guidance of governments and international organisations, assumes a rational “econ” who goes through the various steps from problem definition to the identification of the best option without getting trapped in errors of judgment. Although RIA can be seen as a tool to force deliberative thinking, this on its own does not ensure that errors are removed from the analytical or decision-making processes. As such, RIA fails to properly endogenise the lessons from BI.

Bringing the behavioural public choice perspective in,Footnote 35 we argue that RIA’s effectiveness as a behavioural corrective device depends on behavioural issues brought by humans that can appear in two stages. One is the process during which the analyst(s) carries out the RIA (we call this process “knowledge production”). The other process involves decision-makers who make their own judgments about a RIA’s findings and their implications for the final regulatory choices (we call this process “knowledge utilisation”). In both processes, we need to take the lessons of BI seriously. Our contribution to the literature is to endogenise these lessons. We will do that not by picking biases ad hoc but by working with a structured process for identifying drivers of behaviour that can be linked to biases and, ultimately, solutions. By doing so, we also respond to the “missing middle” critique noted above, focusing our analysis on establishing a logical and practically orientated link between the behavioural biases and barriers that can undermine the effectiveness of RIA and the solutions that could be generated, empirically tested and, if they work, scaled up.

IV. Modelling behaviour in regulatory impact assessment

Let us then look for a structured diagnostic that goes beyond presenting a list of biases. Specifically, we are searching for a set of mechanisms affecting decision-making that enable us to identify how an input variable is translated into an output behaviour.Footnote 36

The model being adopted here comes from previous research, adapted by the OECD in a “toolkit” to help policymakers apply BI in their daily practice, which proposes four central drivers of behaviour (which the model calls “behavioural aspects”) that commonly impact decision-making, represented by the acronym “ABCD”Footnote 37 :

  (1) Attention: humans cannot focus on everything, focusing instead on what is most important given their own knowledge and preferences.

  (2) Belief formation: humans form their own beliefs and make judgments by making sense of the world through a worldview based on their preconceptions.

  (3) Choice: preference formation is influenced by the choice architecture of the decision, including how the choices are framed, as well as the pre-existing preferences and social influences of the person, such as socialisation in the team or organisation where the individual is employed.

  (4) Determination: humans need to stick to their decisions over time. Yet this can be challenging in the context of bounded willpower, bounded self-regulation or bounded self-control.

The model notes that these behavioural aspects are derived from broader psychological theories that attempt to explain various behavioural problems. It adds that a “behavioural problem may be caused by several factors within one aspect as well as by factors from several aspects”.Footnote 38 Put simply, any given behavioural bias or barrier could result from one or many of the ABCDs above. Identifying when, how and to what degree such drivers result in any given behavioural issue is a way to approach resolving this “missing middle” issue.

The ABCD model enables this via a structured diagnostic approach that does not presuppose that a bias exists as part of the broader problem or where the bias fits analytically. Rather, the model encourages stepping back and seeking a logical connection between a behavioural problem and a solution that is rooted in an analysis of behavioural drivers and their effects. In our context, the behavioural problems are the errors in judgment brought by knowledge producers and utilisers that are not effectively resolved by RIA. Thus, in applying the model, we treat the input variables as the course of action or steps necessary to produce or utilise RIA that, when filtered through the behavioural drivers (ABCD), could result in suboptimal output decisions that reduce the effectiveness of RIA as a behavioural corrective tool. In turn, if such behavioural drivers could be resolved, then RIA’s effectiveness could be increased.
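To fix ideas, the diagnostic logic just described can be expressed as a minimal sketch in code. This is purely illustrative and not part of the OECD toolkit: the task name, barrier labels and the diagnose helper are hypothetical assumptions introduced here, showing how an input (a RIA task) is screened against the four drivers to yield testable hypotheses rather than a pre-selected list of biases.

```python
# Minimal, hypothetical sketch of the ABCD diagnostic (illustrative only).
# A RIA task (input) is screened against the four behavioural drivers,
# yielding candidate hypotheses to be tested empirically before designing solutions.
from dataclasses import dataclass, field

DRIVERS = ("attention", "belief formation", "choice", "determination")

@dataclass
class RiaTask:
    name: str                                     # e.g. "data collection"
    barriers: dict = field(default_factory=dict)  # driver -> suspected barriers

def diagnose(task: RiaTask) -> list[str]:
    """Link each suspected barrier to its driver as a testable hypothesis."""
    hypotheses = []
    for driver in DRIVERS:
        for barrier in task.barriers.get(driver, []):
            hypotheses.append(
                f"In '{task.name}', a {driver} issue ({barrier}) may reduce RIA quality."
            )
    return hypotheses

# Illustrative usage with the data-collection step from Section II.
task = RiaTask(
    name="data collection",
    barriers={
        "attention": ["overlooking missing stakeholders"],
        "belief formation": ["preference for familiar datasets"],
        "determination": ["procrastination on hard-to-obtain data"],
    },
)
for hypothesis in diagnose(task):
    print(hypothesis)  # each hypothesis would then be tested, e.g. via diagnostic experiments
```

The point of the sketch is the ordering: barriers are attached to drivers first, and only the resulting hypotheses, once tested, point towards biases and solutions.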

There are several merits to deploying this model. First, it is part of a structured process for understanding, analysing and developing implementable solutions to behavioural problems. Such processes are well understood in the global BI community, with many units following broadly similar approaches, albeit with individual variations. Second, it helps to resolve the issue of randomly selecting biases or relying on untested theories by focusing on developing a practical theory and understanding of the behavioural problem. Finally, it opens many opportunities for empirical testing. Such structured processes require solutions to be tested empirically before scaling up to implementation, but they also provide the possibility of empirical testing to “check” the analysis before moving towards solutions. Diagnostic experiments are becoming an increasingly popular way to apply BI iterativelyFootnote 39 – rather than a single experimental approach, often via randomised controlled trials, which is not well suited to understanding broader and more complex issues.Footnote 40 In fact, such diagnostic experiments can help to isolate behavioural issues for which more robust and narrowly focused experimental methods can be deployed effectively in the solutions stage.Footnote 41

However, applying this model brings up important limitations. First, it allows for a robust analysis of the humans (as opposed to econs) who are engaging with RIA but fails to fully integrate the social and political context. In practice, the analytical steps are carried out by policy teams and utilised by decision-makers supported by a team, with their socialisation patterns and organisational norms. Thus, when talking about the individual, we will refer to an individual in an organisational setting, although only in empirical cases can one pin down exactly the contours of this setting. Moreover, RIA is not produced in a vacuum – it is influenced by contextual factors such as political priorities, interference in policy processes and institutional path dependencies. A second limitation is that we are applying this model without having the data to consider the effects of these behaviours in practice or over time, such as how they may be overridden by processes of learning about tools and methodologiesFootnote 42 or instead reinforce each other in a spiral of not learning from evidence.

To some extent, the first limitation is included in the analysis but is filtered through the individual perspective and not considered on its own. This places important limits that will be revisited in Section VI. It has also been the subject of recent research by others who applied this form of behavioural analysis to government more broadly.Footnote 43 The second limitation is also addressed in Section VI, where we propose ways to operationalise this framework into solutions that should be further investigated and tested.

However, we believe that applying this model as we have is beneficial for two reasons. First, it answers a call to understand what drives errors in policymaking and applies the model to a practical test case that represents two comparatively under-researched fields for behavioural science: government decision-making and regulatory policymaking tools. Second, as will be seen, applying the model to individuals in RIA is already complex, and a full consideration of the social and political context would add complexity beyond the scope of a single article. This work needs to be iterative – test the model to see if it works and iterate it, both as it applies to individuals and through dedicated research on social and political context.

V. Applying the ABCD model

In this section we follow the model developed above and present a sketch of the behavioural analysis of knowledge production and utilisation in RIA. The two processes are intertwined, as is acknowledged by the literature, but can be separated analytically.Footnote 44

We assume that those engaged in knowledge production and utilisation are subject, ceteris paribus,Footnote 45 to behavioural barriers that can affect their judgments. We will refer to the “analyst” in reference to the person(s) engaged in knowledge production, such as those in civil service units or expert advisors, and to the “decision-maker” in reference to those involved in knowledge utilisation, such as a minister or the head of a regulatory agency.

1. Knowledge production

For knowledge production, we consider four main clusters of activities reflecting the steps of a RIA (see Section II): agenda-setting, logical analysis, data analysis and what we call “post-RIA”. Starting with agenda-setting, this is the stage when a government sets the scene and administrative procedure for RIAs to be carried out. This step includes the design of competences and organisational tasks for who carries out the RIA (the department or specialised teams within one or more departments or in central economic analysis units) and, when present, who will exercise scrutiny over the quality of the assessments. Depending on a country’s legal tradition, these tasks may be pinned down in laws about RIA (as in Italy); in other countries, executive orders or even cabinet conventions may be present (as in the USA and the UK, respectively). Let us now turn to our behavioural drivers.

Focusing on attention, an analyst working on producing policy advice is often time-poor, having to follow many policy processes or issues at once. Here, switching costs or distractions limit the ability to focus and allocate attention to analytic tasks, including consultation, until the finalisation of the RIA becomes an urgent issue. Last-minute RIAs will therefore be produced, limiting the potential of the analysis to challenge priors and support high-quality decisions. The problem can be aggravated by multitasking, which reduces the ability to search for all information necessary before beginning a RIA and to cognitively think through the steps. In addition, the complexity of analytical requirements may lead to overload – a sort of paralysis by analysis. An increasing trend in RIA is to add tests that look beyond traditional economic impacts and include a wide array of analytical categories, such as climate impacts, equity and distributional effects.Footnote 46 This process of layering on tests aggravates overload, as the skillsets required to do these types of analyses become harder to find in one person or small group of analysts, which may require horizontal cooperation and ad hoc team building.

Other effects stem from mechanisms of belief formation and choice. On the former, the analyst may come into an appraisal with strong pre-existing beliefs, which form priors that may lead to information being ignored; the absence of that information may in turn lead to intuitive conclusions that are inconsistent with a truly evidence-based approach. On the latter, choice may be influenced by presuppositions or prejudice about the appraisal and its value in decision-making: the personal view of the analyst might frame RIA in ways that disregard dispassionate evidence-based analysis.

This may connect with barriers associated with determination. Motivated reasoning may lead some to conclude that RIA is not necessary or to assume that the policy problems will either solve themselves or are not particularly severe. Exploration will also be neglected if there is a feeling that the conclusions of a RIA will go against the political priorities of the government. Moreover, issues of procrastination may affect the forthcoming RIA. Each policy problem is unique, and a proper RIA requires data, information and knowledge to be fed into the process. Although institutions and the people within them may realise that better data are needed, they might not do the work required to systematise and organise such information as other “urgent” matters become more pressing. As a result, the quality of data analysis may be hobbled from the beginning.

The second group of activities revolves around the analysis proper. In this group we include defining the problem, setting the objectives and identifying policy options that will then be analysed. One important goal in these stages is to establish the intervention logic between a policy problem and potential solutions. Essentially, the quality of these steps is determined by the quality of logical reasoning. Errors in judgment may create non-trivial behavioural effects in this reasoning.

Issues related to attention include overlooking. In problem definition, the analyst may not see the problem in its entirety or may lack key details of the problem. This could then continue when selecting policy options, as the analyst may only focus on a subset of options that are easily available to the mind or fit with the narrowly defined problem. Analysts may also resort to analogising the problem, wrongly diagnosing it as similar to other problems at hand, which can result in anchoring the problem definition in a way that leads to omissions that could be important to the logical analysis. The analyst may also be distracted by the short timeframe or the need to switch between multiple tasks, affecting the entire logical analysis. Finally, specific to objective-setting, the analyst may have uncertaintiesFootnote 47 about who the target of the policy intervention is, how the policy will improve the policy problem or over what time period success can be achieved. This may be compounded by issues being complex or long term, reducing any one institution’s ability to change them.

As for belief formation, at the problem definition stage, the analyst may be susceptible to over- or under-confidence in their or their institution’s ability to solve the policy problem. This may lead to issues related to the ability to retrieve and utilise information to build a problem definition. A preference for abstraction may lead to assessing risks or dealing with the problem definition via abstract concepts rather than concrete knowledge of the policy issue or to using mental shortcuts to make intuitive judgments about the problem. Within objective-setting, this risk response may be augmented by sampling errors that lead to neglecting or altogether lacking the baseline information needed to set objectives correctly. Considering policy options, pre-existing beliefs in the efficacy of certain types of policy options, whether at the institutional or the personal level, may lead to those options receiving preferential treatment rather than a truly neutral evaluation.

Choice will likely have a large impact on how to define the problem and connect that to policy options. This is all fundamentally related to the choice architecture under which the analyst is working. There will inevitably be framing effects, whereby the policy issue will be filtered through what has been learned via socialisation, the organisation’s strategic priorities and/or the priorities of the current government. This will influence the way in which the problem is defined, what preferences are given to objectives (ie relatively achievable compromises, extreme options or established defaults from previous policies) and whether the set of policy options is limited to either what is commonly used by the organisation or what is defined by the government of the day. Social norms have intervening effects, namely on how organisational socialisation, the culture and the practices of the team or government impact what is treated as part of the problem and what is not.

Another issue related to choice is the illusion of control. Ultimately, a government cannot solve every aspect of every problem – some important social and economic variables are ultimately not affected by government policies, or the effect of the policy is small compared to the effect of other variables such as culture and tradition. However, an analyst may misdiagnose an issue as solvable via policy action. The consequence will be overconfidence, overestimating the impact that the preferred regulatory option will have and including policy options that may be ultimately irrelevant to the dynamics of the problem over time. Choice may also be influenced by anchoring effects, which can create unintended consequences when selecting options. For example, if a regulator has a strong but biased sense of its mandate, options anchored in this belief may be chosen.

Determination may have some effects related to feedback and cognitive dissonance. On the former, organisations often operate according to inertia created prior to the RIA process beginning. This includes feedback from previous attempts to advocate for certain objectives and policy options that may have been negatively received, creating a path-dependent organisational culture in which such objectives and options are treated as “no-go” areas. Furthermore, the analyst may feel some discomfort with setting the long-term objectives of a policy and rely more on low-hanging fruit and short-term fixes that provide more immediate gratification for decision-makers, even if longer-term objectives and options are best suited to solving the policy problem.

In the data analysis cluster, attention is likely to have key impacts. First, overlooking may lead to some data being omitted from the analysis. Strong stakeholders who want their voices heard in regulatory choice can lead to an unjustified allocation of attention. This may be engrained in the organisational modus operandi via the well-known phenomenon of regulatory capture. At the other end of the spectrum, no attention may be allocated to missing stakeholders such as prisoners or children, as shown by the SILE project that is underway in Finland.Footnote 48 Expert voices or so-called sophisticated stakeholdersFootnote 49 may be treated like all others, biasing their impact on the data and evidence collected by the analyst. The overall effect is to skew and narrow down the data and evidence considered. The time-intensive effort of obtaining new data can cause frustration, leading the analyst to carry out data analysis only where data most readily exist. Government analysts using consultants may act upon the bias created by black box attitudes – wrongly allocating decision-making attention to complex analyses and their results rather than to the assumptions on which the findings rest.Footnote 50 This risk increases with more complex models and long chains of causation.

We reason that belief formation is likely to have a significant impact on the data analysis cluster of tasks. Analysts will often rely on pre-existing beliefs to process information. For data collection, rather than searching for all relevant information, the analyst may eliminate or focus on certain data or stakeholders, including those that align with their own personal beliefs or what they view as the priorities of their organisation or government. This is likely to dovetail with deciding how to analyse the data collected, whereby pre-existing beliefs in certain analytical models over others (such as CBA over CEA) will have sorting effects, leading to overconfidence in analysts’ ability to determine the optimal policy solution. This is probably impacted by team construction, such as policy teams with a dominance of certain skills (eg economics) or with a leadership that believes in the primacy of certain skills.

Low-quality data may produce a false impression of robust analysis, whereby data input into the models being used are not reliable enough for decision-making. Relatedly, analysts may encounter problems of abstraction, whereby issues of higher abstraction are more attractive to some minds. An analyst may prefer “thinking big” instead of “thinking precisely”, and the data analysis may produce inconclusive findings, such as confidence intervals that are large and overlap across options. Finally, the analyst may overly rely on intuitive judgments, especially in areas where data patterns are non-linear, leading to mental shortcuts or assumptions that do not reflect the reality of the policy problem.

Data analysis is probably also heavily influenced by issues of choice. Much like logical analysis, the choice architecture within which the analyst works has effects on the actions taken here. With data collection, the choice of question posed will have impacts on the information received. In some cases, the analyst will be using pre-existing datasets for which they may not know the questions originally asked to create the data. In others, their institution may be collecting the data, such as through stakeholder engagement, which may be subject to the choice architecture of the existing methods of collecting such data in a government or agencyFootnote 51 and whose rigid formats might, more broadly, hinder the uptake of data that are more sophisticated or narrative in form. When analysing the options, the order in which options are presented may affect how they are analysed and how decision-makers come to understand which option is best. This may lead some to rely on visualisations or “killer charts”, although, if improperly constructed, these may lead to wrong conclusions as much as they help.Footnote 52

Another issue of choice arises out of the possible overload in the form of thick guidance documents with many methods and tests. For an analyst not trained in the use of such methods (or an already very busy analyst), this might represent a large point of friction, leading to the use of shortcuts or the improper use of these methods. The analyst may also lack context for the data being used, failing to understand or consider the social, administrative and political setting in which the data apply, leading to unrealistic or wrong conclusions. A classic mistake is to make errors when modelling implementation,Footnote 53 such as assuming high compliance rates or not looking for data on how an option will be delivered.

Finally, given the level of rigour and complexity needed to successfully complete the steps of data analysis, let us look at issues relating to determination. The RIA process is subject to high levels of friction, with the need to find, collect, process and utilise data from a variety of sources and to apply those data in sometimes complex analytical models. Friction may also arise from within the regulatory institution (department or independent agency), as data may be held by other public entities, including in non-transferrable formats, and it may be against operational norms to reach out to “missed” stakeholders to obtain the information needed. In these conditions, a probable effect is mental exhaustion, leading to a reduced bandwidth to do the many types of calculations necessary and so producing less efficient and more distracted analyses. Equally probable is procrastination, with analysts postponing the “hard” elements of data analysis and leaving too little time to do them well. Issues related to biased feedback appear when lessons have not been transferred from previous attempts to conduct RIA that could help improve data analysis and the selection of the optimal policy solution.

We finally examine the cluster of tasks and activities concerned with the “post-RIA” stage from the perspective of the analyst. Imagine this: a document called RIA is on the table of the regulator. A process has finished. But has it really? Analysts may be thinking that the job is done, meaning that attention might not be allocated properly to monitoring and evaluating the regulatory choice and the evolution of the logic of the intervention over time. After all, these issues often fall to other departments and bodies: design and delivery are frequently the domains of different teams and organisations, such as line ministries/departments versus independent regulators. The analyst may then see this as “not their job”, setting aside the delivery tasks when they could instead help support this process with advice based on the knowledge gained whilst conducting the RIA. Similarly, the analysts may run into issues of multitasking if they are trying to think about both design and delivery, leaving them unable to focus on what they know best in the moment – the policy design. Distractions in the form of needing to switch to other pressing tasks may also result in suboptimal levels of attention being paid to policy implementation.

Issues related to belief formation can lead the analyst to see the design of a new regulation as signalling that the government is acting in relation to a pressing policy issue. This signalling function may depreciate the value (for the analyst) of thinking hard about implementation, monitoring and evaluation plans.

The choice architecture and political context of regulatory delivery are not neutral. If there is a diffuse perception in public administration that politicians and regulators do not pay attention to analysis, this will lead the analysts not to develop arguments and evidence about RIA implementation and evaluation. Furthermore, future discounting will probably appear, as the decisions post-RIA will impact other teams, institutions or levels of government that are not within the analyst’s immediate sphere of influence.

Finally, issues with determination may lead analysts to focus less on actions related to delivery due to mental exhaustion. Considerable time has already been invested in the identification and analysis of options, leaving less time and mental appetite to stay focused on how the regulation will be monitored and evaluated.

2. Knowledge utilisation

We now turn to how RIA is used, which is where political considerations matter the most. Knowledge about problems, feasible options and impacts will be considered by regulators, politicians and other actors who are clearly motivated and sensitive to political considerations.Footnote 54 Knowledge utilisation also depends on the mediating variable of regulatory oversight that, when robust, will control the quality of RIAs before they reach the table of decision-makers. Even in these cases, however, it is perfectly possible that the decision-maker will ignore the advice of a high-quality RIA. Here, to follow our argument coherently, we briefly examine this dimension according to our model.

Attention is obviously a driver of utilisation, as decision-makers must prioritise attention in a political world that is excessively information rich.Footnote 55 Even if there is nothing politically uncomfortable about the conclusions of a RIA, decision-makers must take countless decisions on any given day with a plethora of information provided; hence, it is easy to overlook important information contained within a RIA that could make a difference in terms of the quality of the decision. Distractions and multitasking are common sources of RIA neglect. This is because the utility function of decision-makers does not respond to the need to learn in depth about a problem: political actors do not necessarily seek truth – they seek popularity, reputation and power.Footnote 56 If a RIA is presented in arcane language, without a well-written executive summary or clear implications for regulatory decisions, these issues will be exacerbated.

As for belief formation, decision-makers are naturally more sensitive to political variables than analysts. It is fair to assume that they will hold pre-existing beliefs, even political ideologies, about how problems should be solved: they are political elites who reached office by campaigning and winning elections on a worldview presented to the electorate. In many cases, a well-done RIA can demonstrate that such pre-existing beliefs are suboptimal for solving the policy problem, which presents a challenge to the belief structure of the decision-maker. This may spark issues related to confirmation bias or even claims of political motivation on behalf of the civil service, leading to the findings about impacts and options being dismissed. Even if the decision-makers accept the challenge to their priors, there are issues generated by group effects in cabinet and/or parliamentary parties. This may be exacerbated by overconfidence in what political choices can achieve. The wrong choice of experts – for example, taken from think tanks and organisations close to the incumbent party – will cement beliefs that cannot be changed by the evidence. Despite RIA being, in principle, a tool for eliminating intuitive judgments, mental shortcuts and ideology will trump evidence.

Policy choices are likely to be impacted as decision-makers need to show that something (about a policy problem) is being done by the government.Footnote 57 A RIA option framed as non-intervention or contrary to established political preferences may easily be disregarded. Outcomes framed as losses may be filtered via loss aversion, whereas outcomes framed as gains may chime with the “need to do something”. There could also be effects related to choice order, such as anchoring on the first option presented in the RIA, or compromise and extreme selections driven by the complexity of the choices presented.

Finally, even decision-makers who are ultimately willing to use a RIA to improve decision-making will experience issues related to determination in doing so. One issue is cognitive dissonance when faced with an analysis that challenges pre-existing beliefs, leading to motivated reasoning in the selection of the preferred policy option over that suggested by the RIA. In the context of overwhelmed decision-makers, mental taxation or exhaustion may appear, affecting the propensity to investigate the RIA for solutions to the problems on the political agenda. Governments also work according to inertiaFootnote 58 and amnesia,Footnote 59 two factors that hinder knowledge utilisation.

VI. Discussion and conclusions

At this point, after applying the behavioural drivers as a conceptual framework for our analysis, one may conclude that RIA is simply an impossible and unrealistic task. Yet, we believe that we have demonstrated that the only impossible and unrealistic assumption is the one about econs that permeates RIA guidance and training. If we take BI seriously and look at humans, as has been done in some exploratory work by OECD officers,Footnote 60 solutions are in sight.

To begin, our analysis can be used by regulatory oversight bodies who assist agencies and departments in learning about RIA’s effectiveness.Footnote 61 This also aligns with a broader discussion within international organisationsFootnote 62 and the academic communityFootnote 63 about the need to critically revisit the “better regulation” agenda more broadly, with a focus on addressing implementation gaps in the toolbox and developing what the OECD has termed “regulatory policy 2.0”.Footnote 64

Two elements will need further attention: first, all of the specific points we made about behavioural drivers require a deeper empirical understandingFootnote 65 on their own before designing solutions for a given country or sector. Second, revisiting the limitations noted above, we need a dedicated analysis of the organisational, social and political context. RIA in government is produced and utilised within an institutional context with effects that are top down (ie instructions from decision-makers to study certain policy issues and in certain ways), bottom up (ie how advice from the civil service is transmitted, received and interpreted by decision-makers) and cross-cutting (ie external influences on or even capture of decision-makers and institutions). There is a developing literature on the salience of the political dimension of policymaking (although not focused on regulatory policymaking) to draw upon, including work by the European Commission and the Behavioural Insights TeamFootnote 66 as well as recent work by LinosFootnote 67 and by Hirsch and Wong-Parodi,Footnote 68 which would be a useful starting point to broaden our analysis.

Looking at each driver, for attention, solutions are grounded in two realities: we live in a world that is information rich, but we lack the time and ability to process everything all at once. Humans involved in policymaking will, quite understandably, be distracted, become overloaded and overlook seemingly important information. Solutions for RIA production and utilisation would need to find ways to direct the analyst’s or decision-maker’s attention to the right information about the problem and not just what is expedient, personal or attractive. Given the availability of prior research in this area, an easy starting point may be to look at prompts, defaults or other oversight mechanisms. Simplifying how information is presented (ie visualisations, charts or short summariesFootnote 69) and in-class experimentsFootnote 70 have the potential to redirect attention. Training should be informed by regulatory humility rather than assuming that the public manager is a heroFootnote 71 capable of overcoming the cognitive errors of humans.

Belief formation requires us to accept that humans are both products of their own environment and influenced by the environment around them, even when doing policy analysis. Our beliefs are naturally applied to solving problems via intuitive judgments and mental shortcuts, especially in times of uncertainty (a normal condition for policymaking). RIA is fundamentally a method for challenging assumptions, but many officers fall into the trap of automatically repeating the steps in the RIA process. Encouraging analysts (and decision-makers as well) to take a more Bayesian approach to questioning priors and widening their peripheral vision should be part of all background training courses on evidence-based policy. Think of decision aidsFootnote 72 as well as guiding search through tools such as checklistsFootnote 73 or decision trees that remind people to reflect on the steps in their causal reasoning. The presence of think tanks, diverse expertise centres in society and advocacy organisations that challenge the government’s numbers and priors makes the environment more pluralistic and therefore could help to mitigate this problem.

Influences on choice include organisational mandates or institutional priorities, as well as the RIA guidance being used and the (in)availability of data. Research on the illusion of control highlights the benefits of framing devices that could be adopted in RIA guidance, such as vignettes, simple tests and exemplars taken from the world of practice. Guidance and training courses should also show how technical, expert-based knowledge should be blended with practical knowledge. Indeed, technical expertise can become an impenetrable wall of prefabricated choices if it does not leave the door open to the lessons of practical experience. Expertise and science are real and distinct forms of knowledge – a point that has been debated at length in science and technology studies (STS).Footnote 74 But the same STS literature shows how problems need various types of knowledge to be addressed and solved.

More broadly, there are plenty of examples from the literature that show that how issues are framed can make a difference, such as changing the framing from a negative problem to be solved to a positive opportunity to make an impact. Solutions could also explore how presenting RIA in different ways affects decision-making, such as moving away from the binary cost–benefit framework that may evoke loss aversion in both the analyst and decision-maker.

Finally, we consider determination. As with attention, humans in government are overly busy, which could lead to prioritisation, procrastination and mental exhaustion, limiting their ability to “stick with RIA” and rendering them subject to frictions that can make completing a RIA seem like a daunting task. Guidance and training for RIA should work with these drivers, showing how to achieve RIA in a context in which there are so many ways to fail to do so. Often-used tools such as champions, success stories and commitment devices may offer some solutions. Alternatively, addressing the most time- and energy-consuming analytical aspects of RIA could help reduce the complexity of a RIA and, thus, the energy needed to complete one. This can be done by insisting on the proportionality of the analysis and concentrating major analyses in those RIAs that support the most important regulatory innovations of a government.Footnote 75 In turn, this could generate more political and public interest for what otherwise would be, for many, the mysterious object of “impact assessment”.

Moving from policy recommendations to our contribution to the literature, we have sought to understand the behavioural drivers of RIA knowledge production and utilisation from a novel perspective rather than working through lists of biases. This has revealed the lack of realistic assumptions about how humans use RIA in practice, as well as the gap between this practice and how it is taught and described in official guidance. Our analysis in terms of drivers sheds light on why RIA often does not work as a corrective to errors of judgment and decision-making, even if theoretically its rationale lies exactly in this error-correcting function. Hence our call for endogenising BI into RIA design, guidance, training and implementation.

Our contribution is complementary (not alternative) to the literature on the effects of political and administrative context on RIA production, implementation and utilisation.Footnote 76 Although this literature looks at macro-variables such as the characteristics of the political system, economic resources, pressure groups and bureaucratic efficiency, we have grounded our approach in BI, taking into consideration how individuals behave in the real world. In doing so, we have followed and extended the work of Drummond et al.Footnote 77 Future research could usefully combine these approaches. We have of course sketched our actors in abstract ways in talking about the analyst and the decision-maker, whereas real-life use of RIA is by teams, groups and organisations in turn affected by social and political context. On the one hand, individuals within government will surely be influenced by institutional and group dynamics, which need to be better understood. On the other, organisations are made up of individuals who can be influenced via the policies and procedures within them, especially when applied to the right people.Footnote 78 Hence, future research should connect our micro-perspective to a meso-organisational and macro-analysis to achieve a more holistic understanding of the problem and its potential solutions.

Acknowledgments

James R. Drummond wishes to thank the Policy Leaders Fellowship programme of the European University Institute, School of Transnational Governance. Claudio Radaelli acknowledges the support of the project Procedural Tools for Effective Governance (Protego), funded by the European Research Council, grant number 694632. Both authors are grateful to the reviewers, as well as to Anna Pietikäinen, Daniel Trnka and Richard Alcorn from the OECD, who commented on early drafts.

Competing interests

The authors declare none.

References

1 There is no universal definition of the “better regulation” agenda, but the key elements can be seen with the European Commission, “Objectives of the Better Regulation agenda” <https://commission.europa.eu/law/law-making-process/planning-and-proposing-law/better-regulation_en> (last accessed 2 August 2023). For a critical overview of the concept, see CM Radaelli, “Occupy the Semantic Space! Opening Up the Language of Better Regulation” (2023) 30 Journal of European Public Policy 1860.

2 Note that in some systems, such as in Europe, better regulation is sometimes used by the legislative branch.

3 Modernizing Regulatory Review, Executive Order 14094 of 6 April 2023; European Commission, “Better Regulation Toolbox” (2023) <https://commission.europa.eu/law/law-making-process/planning-and-proposing-law/better-regulation/better-regulation-guidelines-and-toolbox/better-regulation-toolbox_en> (last accessed 8 August 2023); OECD, Regulatory Policy Outlook 2021 (Paris, OECD Publishing 2021); OECD, Guiding Principles for Regulatory Quality and Performance (Paris, OECD Publishing 2008); F Simonelli and N Iacob, “Can we better the European Union better regulation agenda?” (2021) 12(4) European Journal of Risk and Regulation 849.

4 RW Hahn, “Reforming Regulation with an Eye toward Equity” (2023) 380 Science 899; RW Hahn and PC Tetlock, “Has Economic Analysis Improved Regulatory Decisions?” (2008) 22 Journal of Economic Perspectives 67; CA Dunlop and CM Radaelli, “Better Regulation in the European Union” in M Maggetti, F Di Mascio and A Natalini (eds), Handbook of Regulatory Authorities (Cheltenham, Edward Elgar 2022) p 302; A Bunea and R Ibenskas, “Unveiling Patterns of Contestation over Better Regulation Reforms in the European Union” (2017) 95(3) Public Administration 589; O Fritsch, CM Radaelli, L Schrefler and A Renda, “Comparing the Content of Regulatory Impact Assessments in the UK and the EU” (2013) 33 Public Money & Management 445; G Listorti, EB Ferrari, S Acs, G Munda, E Rosenbaum, P Paruolo and P Smits, “The Debate on the EU Better Regulation Agenda: A Literature Review” JRC Science for Policy Report (Luxembourg, Publications Office of the European Union 2019).

5 The term “behavioural insights” was coined by the Government of the United Kingdom when creating the Behavioural Insights Team (see D Halpern, Inside the Nudge Unit: How Small Changes Can Make a Big Difference (London, W.H. Allen 2016)). It has come to represent the field of behavioural science as applied to public policy.

6 HA Simon, Models of Man: Social and Rational (New York, Wiley 1957).

7 D Kahneman, Thinking, Fast and Slow (1st edition, New York, Farrar, Straus and Giroux 2013).

8 CA Dunlop and CM Radaelli, “Overcoming Illusions of Control: How to Nudge and Teach Regulatory Humility” in A Alemanno and A-L Sibony (eds), Nudge and the Law (Oxford, Hart Publishing 2015).

9 SE Dudley and Z Xie, “Designing a Choice Architecture for Regulators” (2020a) 80 Public Administration Review 151; SE Dudley and Z Xie, “Nudging the nudger: towards a choice architecture for regulators” (2020b) 16 Regulation and Governance 261.

10 This is inspired by RH Thaler, “From Homo economicus to Homo sapiens” (2000) 14 Journal of Economic Perspectives 133; RH Thaler and CR Sunstein, Nudge: Improving Decisions About Health, Wealth, and Happiness (London, Penguin Books 2009, updated 2021).

11 OECD, 2021, supra, note 3; OECD, Supporting Regulatory Reform in Southeast Asia (Paris, OECD Publishing 2022).

12 OECD, Regulatory Policy in Perspective: A Reader’s Companion to the OECD Regulatory Policy Outlook 2015 (Paris, OECD Publishing 2015).

13 F De Francesco, “Diffusion of Regulatory Impact Analysis in OECD and EU Member States” (2012) 45 Comparative Political Studies 1277; CM Radaelli and ACM Meuwese, “Better Regulation in Europe: Between Public Management and Regulatory Reform” (2009) 87 Public Administration 639; CM Radaelli, “Diffusion without Convergence: How Political Context Shapes the Adoption of Regulatory Impact Assessment” (2005) 12 Journal of European Public Policy 924.

14 CM Radaelli, “Rationality, Power, Management and Symbols: Four Images of Regulatory Impact Assessment” (2010) 33 Scandinavian Political Studies 164.

15 OECD, 2021, supra, note 3, 72–73, figure 2.9.

16 OECD 2015, supra, note 12; OECD, Best Practice Principles for Regulatory Policy: Regulatory Impact Assessments (Paris, OECD Publishing, 2020).

17 WA Niskanen Jr, Bureaucracy and Representative Government (1st edition, New York, Routledge 1971).

18 G Tullock, A Seldon and GL Brady, Government Failure: A Primer in Public Choice (San Francisco, CA, Cato Institute 2002); T Gayer and WK Viscusi, “Behavioural Public Choice: The Behavioural Paradox of Government Policy” (2015) 38 Harvard Journal of Law and Public Policy 973.

19 WJ Congdon, JR Kling and S Mullainathan, Policy and Choice: Public Finance through the Lens of Behavioural Economics (Washington, DC, Brookings Institution Press 2011).

20 Gayer and Viscusi, supra, note 18; GM Lucas and S Tasic, “Behavioral Public Choice and the Law” (2015) 118 West Virginia Law Review 199.

21 Gayer and Viscusi, supra, note 18; Lucas and Tasic, supra, note 20.

22 Dudley and Xie, 2020a, 2020b, supra, note 9.

23 Dunlop and Radaelli, supra, note 8.

24 CR Sunstein, “Cognition and cost–benefit analysis” (2000) 29 Journal of Legal Studies 1059; CR Sunstein, The Cost–Benefit Revolution (Cambridge, MA, MIT Press 2019); J Costa-Font, “Behavioural Welfare Economics: Does ‘Behavioural Optimality’ Matter?” (2011) 57 CESifo Economic Studies 551; N Hanley and JF Shogren, “Is cost–benefit analysis anomaly-proof?” (2005) 32 Environmental and Resource Economics 13; JL Knetsch, “Behavioural effects and cost–benefit analysis: lessons from behavioural economics” in E Quah and R Toh (eds), Cost–Benefit Analysis: Cases and Materials (London, Routledge 2012); B Schwartz, “The limits of cost–benefit calculation: commentary on Bennis, Medin, & Bartels” (2010) 5 Perspectives on Psychological Science 203.

25 Dudley and Xie, 2020a, 2020b, supra, note 9; J Drummond, D Shephard and D Trnka, “Behavioural insights and regulatory governance: opportunities and challenges” (2021) OECD Regulatory Policy Working Papers 1.

26 This is summarised in M Hallsworth, “A manifesto for applying behavioural science” (2023) 7 Nature Human Behaviour 315.

27 ibid, 315. The argument is also made in opinion pieces such as J Collins, “We don’t have a hundred biases, we have the wrong model” (Works in Progress, 21 July 2022) <https://worksinprogress.co/issue/biases-the-wrong-model/#an-alternative-way-of-thinking-about-bias> (last accessed 8 August 2023); and K Smets, “There is more to behavioural economics than biases and fallacies” (Behavioural Scientist, 24 July 2018) <https://behavioralscientist.org/there-is-more-to-behavioral-science-than-biases-and-fallacies/> (last accessed 12 June 2023).

28 OECD, Tools and Ethics for Applied Behavioural Insights: The BASIC Toolkit (Paris, OECD Publishing 2019), written in collaboration with Dr Pelle Guldborg Hansen of Roskilde University.

29 Hallsworth, supra, note 26, 315.

30 ibid.

31 OECD, supra, note 28; Hallsworth, supra, note 26.

32 OECD, supra, note 28, 84–88.

33 WEIRD stands for “Western, educated, industrialised, rich and democratic”. For example, see J Henrich, SJ Heine and A Norenzayan, “The weirdest people in the world?” (2010) 33 Behavioral and Brain Sciences 61.

34 H Brighton and G Gigerenzer, “The bias bias” (2015) 68 Journal of Business Research 1772.

35 Gayer and Viscusi, supra, note 18; Lucas and Tasic, supra, note 20.

36 OECD, supra, note 28, 84–88.

37 ibid, 72.

38 ibid, 70.

39 For example, see OECD, Behavioural Insights and Organisations: Fostering Safety Culture (Paris, OECD Publishing 2020), which used behavioural vignettes to examine behaviours thought to contribute to safety problems in regulated entities.

40 OECD, “Regulatory policy and COVID-19: behavioural insights for fast-paced decision making” OECD Policy Responses to COVID-19 (Paris, OECD Publishing 2020).

41 ibid.

42 S Owens, T Rayner and O Bina, “New Agendas for Appraisal: Reflections on Theory, Practice, and Research” (2004) 36 Environment and Planning A 1943.

43 M Scharfbillig, L Smillie, D Mair, M Sienkiewicz, J Keimer, RP Dos Santos, HV Alves, E Vecchione and L Scheunemann, Values and Identities: A Policymaker’s Guide (Brussels, European Commission 2021); D Mair, L Smillie, G La Placa, F Schwendinger, M Raykovska, Z Pasztor and R van Bavel, Understanding our Political Nature: How to Put Knowledge and Reason at the Heart of Political Decision-Making (Brussels, European Commission 2019); M Hallsworth, M Egan, J Rutter and J McCrae, Behavioural Government: Using Behavioural Science to Improve How Governments Make Decisions (London, Behavioural Insights Team 2018).

44 CA Dunlop, O Fritsch and CM Radaelli, “The appraisal of policy appraisal” (2014) 149 Revue Française d’Administration Publique 163.

45 See discussion and limitations noted above.

46 C Cecot and RW Hahn, “Incorporating equity and justice concerns in regulation” (2024) 18 Regulation & Governance 99.

47 Although uncertainty is an inherent feature of the analysis carried out in RIA and should be transparently reported (eg with confidence intervals and a sensitivity analysis), here we refer to the problems that uncertainty may create in the logic of reasoning.

48 SILE, Silent Agents Affected by Legislation (n.d.) <https://www.hiljaisettoimijat.fi/?lang=en> (last accessed 3 August 2023).

49 CR Farina and MJ Newhart, “Rulemaking 2.0: Understanding and Getting Better Public Participation” (2013) 15 Cornell E-Rulemaking Initiative Publications <https://scholarship.law.cornell.edu/ceri/15/> (last accessed 3 August 2023); O Perez, “Can Experts Be Trusted and What Can Be Done About It? Insights from the Biases and Heuristics Literature” in A Alemanno and A-L Sibony (eds), Nudge and the Law (Oxford, Hart Publishing 2015).

50 Perez, supra, note 49.

51 CA Belton, DA Robertson and PD Lunn, “An experimental study of attitudes to changing water charges in Scotland” Working Paper no 654 (Dublin, Economic and Social Research Institute 2020).

52 See the ethnographic study by A Stevens, “Telling Policy Stories: An Ethnographic Study of the Use of Evidence in Policy-Making in the UK” (2011) 40 Journal of Social Policy 237.

53 D Macrae, “Compliance and Delivery Analysis” in CA Dunlop and CM Radaelli (eds), Research Handbook of Regulatory Impact Assessment (Cheltenham, Edward Elgar 2017).

54 For a discussion, see Radaelli, supra, note 13.

55 BD Jones and FR Baumgartner, The Politics of Attention: How Government Prioritizes Problems (Chicago, IL, University of Chicago Press 2005).

56 Radaelli, supra, note 13.

57 Dunlop and Radaelli, supra, note 8.

58 R Rose and T Karran, Taxation by Political Inertia (London, Macmillan 1987).

59 A Stark and B Head, “Institutional Amnesia and Public Policy” (2019) 26 Journal of European Public Policy 1521.

60 Drummond et al, supra, note 25.

61 OECD, 2021, supra, note 3.

62 CM Radaelli, L Allio, K O’Connor, R Alcorn and D Trnka, “Viewpoints and beliefs about better regulation: a report from the ‘Q Exercise’” (2022) 20 OECD Regulatory Policy Working Papers.

63 NAK Rantala and J Kuorikoski, “The logic of regulatory impact assessments: from evidence to evidential reasoning” (2023) Regulation & Governance <https://onlinelibrary.wiley.com/doi/10.1111/rego.12542> (last accessed 10 August 2023).

64 OECD, 2021, supra, note 3.

65 See discussion above in Section IV.

66 Supra, note 43.

67 E Linos, “Translating Behavioral Economics Evidence into Policy and Practice” (2023) report commissioned by the National Academies of Sciences, Engineering and Medicine <https://nap.nationalacademies.org/resource/26874/NASEM_Commissioned_Report_Linos.pdf> (last accessed 10 August 2023).

68 KCP Hirsch and G Wong-Parodi, “Activating an evidence-based identity increases the impact of evidence on policymakers’ beliefs about local climate policies” (2023) 2 Environmental Research: Climate 015008.

69 N Nakajima, “Evidence-Based Decisions and Education Policymakers” (2021) working paper <https://nozominakajima.github.io/files/nakajima_policymaker.pdf> (last accessed 10 August 2023).

70 CA Dunlop and CM Radaelli, “Teaching Regulatory Humility: Experimenting with Student Practitioners” (2016) 36 Politics 79.

71 ibid. On heroism, see CM Radaelli, “Future-Proofing Public Management” (STG Policy Analysis, 2021) <https://cadmus.eui.eu/handle/1814/71376> (last accessed 8 August 2023).

72 M Toma and E Bell, “Understanding and Increasing Policymakers’ Sensitivity to Program Impact” (2023) working paper <https://drive.google.com/file/d/1qkStG3Y-FZvizfUzQwUP2GkD7hzMlM-I/view> (last accessed 10 August 2023).

73 A Gawande, The Checklist Manifesto: How to Get Things Right (London, Picador 2011).

74 We concur with the STS position taken by H Collins, R Evans and M Weinel, “STS as science or politics?” (2017) 47 Social Studies of Science 580.

75 OECD, “A closer look at proportionality and threshold tests for RIA” (2022), annex to OECD, 2020, supra, note 16 <https://www.oecd.org/regreform/Proportionality-and-threshhold-tests-RIA.pdf> (last accessed 8 August 2023).

76 Radaelli, supra, note 13; De Francesco, supra, note 13; F De Francesco, CM Radaelli and VE Troeger, “Implementing Regulatory Innovations in Europe: The Case of Impact Assessment” (2012) 19 Journal of European Public Policy 491.

77 Supra, note 25.

78 L Foster, “Applying behavioural insights to organisations: theoretical underpinnings” (10 May 2017, EU-OECD Seminar Series on Designing Better Economic Development Policies for Regions and Cities) 5 <https://www.oecd.org/cfe/regionaldevelopment/Foster_Applying-Behavioural-Insights-to-Organisations.pdf> (last accessed 24 January 2024).