Mandates to disclose information have become a standard government response to problems of asymmetric information in most industrialized countries. U.S. federal agencies, like those in other industrialized democracies, use this regulatory strategy in areas ranging from automobile fuel efficiency labeling to food labeling, mortgage lending, and consumer finance. Requirements that private parties divulge certain information prior to specific transactions are intended to ensure that potentially uninformed parties, such as consumers or individual investors, are adequately informed.
Executive Order 13563, issued by President Obama, emphasized the importance of considering information disclosure strategies as an alternative or a complement to prescriptive regulatory approaches. The U.S. Office of Information and Regulatory Affairs (OIRA) has also highlighted information disclosure as a federal regulatory strategy, placing special emphasis on testing such approaches before they are disseminated, as well as on ongoing monitoring of their effectiveness.
In doing so, it has called for scientifically valid, randomized experiments to test the effects of alternative methods of disclosure before imposing a specific design (OIRA, 2010a).
In this paper, we explore the extent to which federal regulators, in response to this Administration’s emphasis on information disclosure, are adopting analytic methods to evaluate the effectiveness of mandated information disclosure in increasing comprehension.
Such methods can be seen as consistent with the benefit-cost analyses required by E.O. 13563 and its predecessors such as E.O. 12866.
These E.O.s and the related A-4 guidance (Office of Management and Budget, 2003) from the federal Office of Management and Budget focus on regulatory action to address market failures and, in particular, they recommend the use of mandated information disclosure as an appropriate response to information asymmetries. Thus, we accept that the purpose of mandated information disclosure is to correct an information deficiency – and not to alter consumer behavior.
We focus here on comprehension as a measure of effectiveness because it would seem to be a necessary first step for claims that consumers are better off from the information disclosure. In general, if the market failure is asymmetric information and the purported solution involves information disclosure then improved comprehension of that information should be seen as necessary for a proposed disclosure to be an effective remedy to the problem. Put differently, claims of gains from mandated information disclosure without improvements in comprehension should be seen as questionable. Of course, assessment of comprehension entails additional costs that should be considered in decisions associated with the evaluation of mandated information disclosures.
Although federal mandates to disclose information underpin a number of flagship regulatory initiatives, we have found only a few exceptional cases in which there is any evidence that the responsible regulatory agencies conducted a quantitative evaluation of the effects on comprehension or comparable measures of effectiveness. The absence of information on effectiveness includes the failure to conduct both prospective analyses of new requirements and retrospective analyses of information disclosures that have already been mandated. As a result, any judgment about whether federally mandated information disclosures are generally effective at ameliorating the problem of asymmetric information rests, at this point, on a very limited set of anecdotes.
Our finding that very little is known about the effectiveness of federally mandated information disclosures in improving comprehension may be surprising. While it is costly and time-consuming to acquire data appropriate to evaluate the costs and benefits of most federal regulations, data sufficient to make inferences about the effects of information disclosures on comprehension can generally be obtained relatively quickly and at relatively low cost. Further, studies of comprehension have been conducted by federal agencies – albeit in a limited number of cases – and such studies should be viewed as a practical first step for evaluating agencies’ mandatory information disclosures given current budget constraints.
Our analysis leads us to several recommendations to foster additional quantitative analysis of the effectiveness of federally mandated information disclosures. We suggest that major federal rulemakings mandating information disclosure be accompanied by a rigorous evaluation of their effectiveness at improving comprehension. We recommend that OIRA revisit and consolidate its directives on information disclosure, including clarifying best practices and streamlining its review, under the Paperwork Reduction Act, of scientific studies of effectiveness. We also recommend that Congress increase appropriations for the FTC’s consumer research, given the agency’s leadership in this area.
The rest of this paper presents background information on the estimation of the effects of federally mandated information disclosure, methods and data, a systematic analysis of economically significant federal rules issued over six years, selected case studies, and a discussion and conclusions.
Economists’ recognition that mandatory information disclosure may be ineffective in changing beliefs or understanding is nearly 35 years old, e.g., Beales, Craswell and Salop (1981), even if this notion is not prominent in the recent literature.
More broadly, difficulties in understanding information, especially information pertaining to risk, may be seen as manifestations of the heuristics and biases in judgments under uncertainty identified by Tversky and Kahneman (1974). They documented systematic errors in the thinking of normal people and traced those errors to the machinery of cognition (Kahneman, 2011).
The rise of mandatory information disclosure as government policy has been accompanied by substantial scholarship about such disclosure and its effects. Weil, Fung, Graham and Fagotto (2006) and Fung, Graham and Weil (2007) provide a qualitative appraisal of circumstances where mandatory disclosure of information is effective, assuming a broad definition of effectiveness. They argue that disclosure is effective only if the disclosed information becomes “embedded” in the everyday decision-making routines of both information users and disclosers. Weil et al. (2006) note “Even if valuable and compatible with users’ routines, [newly disclosed] information is unlikely to become embedded in everyday choices unless it can be readily understood.”
In a review of Fung et al. (2007), Winston (2008) reviews existing empirical studies to determine whether the information policies identified by Fung et al. have significantly reduced the costs to consumers created by imperfect information. On the basis of this review, Winston concludes that government mandated information policies have “amounted to weak solutions in search of a problem.” He suggests that more empirical evidence about the role of laws and regulations mandating disclosure is needed. More recently, Ben-Shahar and Schneider (2014) argue that mandatory information disclosure has become ineffective and represents a failed approach to solve a wide variety of policy problems.
Several recent papers have focused on specific disclosure policies, including, for example, building energy performance (Hsu, 2014), carbon emissions (Matisoff, 2013), chemical releases (Finger & Shanti, 2013), lead-based paint (Bae, 2012), and electric power emissions (Delmas, Montes-Sancho & Shimshack, 2010). These papers do not, however, assess the effectiveness of federal disclosure policies in remedying information asymmetries relevant to buyers or sellers engaging in specific transactions.
There is little systematic research on the reasons why information disclosures fail. We examine whether regulators adequately evaluate the extent to which mandatory disclosures successfully convey the required information to consumers. We offer a focused assessment of mandated information disclosures as the standard government response to information asymmetries. We concentrate on comprehension or understanding – a specific measure of effectiveness – because it seems to underpin the action cycle of Weil et al. (2006), it is uniquely under regulators’ control, and recent OMB documents highlight its policy importance. Our approach may be seen as complementary to that of other researchers who have questioned the effectiveness of current disclosure policies (e.g., Ben-Shahar & Schneider, 2014; Guthrie, 2006). It goes beyond researchers who have focused on a single area (e.g., Willis, 2006, regarding mortgage lending) by suggesting a remedy that is fairly broadly applicable: agencies should conduct prospective and retrospective testing of mandated disclosure instruments to determine their effectiveness.
Assessments of how well mandatory information disclosures improve consumers’ comprehension of the context and consequences of their decisions can provide information that helps inform regulatory policy decisions. Regulators often can choose between either a ban on products that have adverse consequences, or a mandate for the disclosure of relevant information. Product bans may reduce risk but at the cost of consumer choice. Mandatory information disclosure may promote informed choice, but at the risk of potential misunderstanding of this information by consumers.
In addition, mandatory disclosure will change behavior to address public policy goals in the desired way only to the extent that market participants respond to the disclosure as desired. Where regulators have adopted mandatory disclosures, assessments of the effectiveness of alternative forms of disclosure can help to improve comprehension of key product characteristics.
OIRA has also made recommendations regarding the potential role of information disclosure as a regulatory tool in recent guidance and in its reports to Congress. These recommendations recognize that unduly complex disclosure requirements might fail to inform consumers, a notion consistent with a longstanding awareness of potential difficulties in effectively communicating information (e.g., Beales et al., 1981). As a result, OIRA (2009) urges that agencies study in advance – to the extent possible – the actual effects of alternative disclosure designs.
OMB’s recent guidance outlined some of the elements appropriate to an evaluation of “significant” information disclosures (OIRA, 2010b). In these cases, OIRA stated that agencies should test several methods of disclosure before imposing one, use scientifically valid and randomized experiments, and elicit information about actual choices and behavior.
The OIRA (2010a) recommendation to use “scientifically valid” and “randomized” experiments is more specific than earlier OMB advice. But the recommendation does not identify a preferred methodological approach or a specific example of best practices.
An FTC study demonstrates the importance of careful evaluation in determining the effectiveness of mandatory information disclosures (Lacko & Pappalardo, 2007, 2010). Consistent with the OIRA guidance (OIRA, 2010c, 2012) and following Lacko and Pappalardo (2004), Lacko and Pappalardo (2010) focused on quantitative testing that is scientifically valid because it uses random assignment and controls (although it is not blinded). Lacko and Pappalardo conducted a qualitative study prior to the quantitative testing. In addition, Lacko and Pappalardo (2010) focused on understanding or comprehension by the relevant population (i.e., recent home mortgage borrowers) – an approach suitable for measuring “actual effects,” following Executive Order (E.O.) 13563. The study surveyed more than 800 recent mortgage borrowers to examine how well borrowers in 2007 understood the terms and conditions of their mortgages, as described on mandatory disclosure forms. It evaluated consumers’ comprehension not only in absolute terms but also relative to comprehension assessed using an alternative information disclosure form that the authors had designed themselves. Lacko and Pappalardo (2007) reported that their prototype form significantly improved consumers’ understanding:
“The prototype disclosures provided improvements across a wide range of loan terms and for substantial proportions of respondents. The improvements provided by the prototype form included:
∙ 66% point increase in the proportion of respondents correctly identifying the total amount of up-front charges in the loan.
∙ 43% point increase in the proportion of respondents recognizing that the loan contained charges for optional credit insurance.
∙ 37% point increase in the proportion correctly identifying the amount borrowed.
∙ 24% point increase in the proportion recognizing that a prepayment penalty would be assessed if the loan was refinanced in two years.
∙ 21% point increase in the proportion correctly identifying why the APR and interest rate may differ in a loan.
∙ 16% point increase in the proportion correctly identifying the APR amount.
∙ 15% point increase in the proportion correctly identifying the amount of settlement charges.” (Lacko & Pappalardo, 2007, p. ES-8)
Lacko and Pappalardo (2010) demonstrated avoidable, major, and statistically significant inadequacies in consumer comprehension of pertinent mortgage information resulting from the use of federally mandated disclosure forms. These inadequacies existed even in a case like residential mortgages where the individual and aggregate stakes are high and federal involvement is both extensive and well established. Lacko and Pappalardo concluded that in this case the mandatory disclosures confused consumers, leading many to choose loans that were more expensive than the available alternatives. They showed that careful quantitative assessment of comprehension of disclosed information may be necessary for revising information disclosure forms to achieve significantly better performance. Implicitly, they viewed the time and paperwork costs of disclosing the information and documenting the disclosure as small relative to the gains to borrowers from being much better informed about the terms of their mortgage. More recently, Perry and Blumenthal (2012) also argue that better-designed research could lead to significant improvements in public policy and consumer protection and result in a more efficient allocation of resources. Thus, federally mandated information disclosure, if not tested in rigorous randomized experiments, may not work as intended and may in fact leave many people uninformed, if not misled.
3 Methods and data
We adopt a three-pronged approach, consisting of an automated text search of the universe of major rules issued over six years, an analysis of reports of retrospective regulatory reviews conducted by four major regulatory agencies, and selected case studies. Our systematic review tries to identify all analyses by federal regulatory agencies of the effectiveness of mandatory information disclosures in major final regulations issued from fiscal year 2008 through fiscal year 2013 (FY13). In total, we identified 357 major final rules, including interim final rules. For all of these rules, we conducted an automated search using Adobe Acrobat to identify the use of selected key words related to assessments of the effectiveness of mandatory information disclosure. We believe that this search procedure provides a sound basis for making rigorous statements about the content of the final regulations in question.
Second, we evaluated the retrospective regulatory review reports that agencies are required to prepare and publish to comply with E.O. 13610, Identifying and Reducing Regulatory Burdens, issued in July 2012. We focus on diverse U.S. regulatory agencies with large regulatory programs: the Environmental Protection Agency (EPA), the Department of Energy’s Office of Energy Efficiency and Renewable Energy, and the Food and Drug Administration and the Centers for Medicare and Medicaid Services, both of the Department of Health and Human Services.
In addition, we considered a few prominent regulatory initiatives for which information disclosure was a key regulatory strategy. For these initiatives we reviewed the relevant final rules and assessed the analysis the agency conducted to evaluate the effectiveness of mandatory information disclosure. This in-depth appraisal of a few recent regulations allows us to show how information disclosure regulation might be made more effective at relatively low incremental cost.
We turn below to each prong of our analysis.
4 Systematic analysis
To carry out our systematic review, we collected electronic copies of all Federal Register notices for all final and interim final rules that OIRA identified as major in its annual reports to Congress for FY08 to FY12 (e.g., OIRA, 2010c). Because those annual reports do not cover the full period through FY13, we instead compiled a list of final and interim final regulations that are economically significant by using the search tool on the “Historical Reports” page of OIRA’s website (http://www.reginfo.gov/public/do/eoHistoricReport), making adjustments for the differences between the date that OIRA concluded its review and the date of publication in the Federal Register. We have not included major or economically significant final rules issued by independent regulatory agencies in FY13, as we are unaware of a centralized listing of such rules. Of the 357 major rules identified through this search, executive branch agencies issued 278 (78%), and independent regulatory agencies, such as the Securities and Exchange Commission and the Consumer Financial Protection Bureau (CFPB), issued 79 (22%).
We then searched using Adobe Acrobat for keywords that might permit us to characterize the subset of these major rules that mandate information disclosure. We found 217 with the word “disclosure,” 141 with the word “label,” and 106 with the words or word fragments “disclos” (which could be disclosure or disclose), “label,” or “information collection,” a term of art in implementing the Paperwork Reduction Act. Fifty-seven of the 357 major final rules contained the phrase “must disclose.” We note, however, that these occurrences of keywords may not indicate a mandated information disclosure. For example, a search for “label” may turn up rules that do not mandate information disclosure if “label” appears in the preamble to describe a rule other than the new rule being issued. Or the search results may exclude some rules that mandate disclosure by using synonymous language, such as “regulated firms shall be required to add the following information to their stickers,” instead of “label.” The search results might therefore overestimate or underestimate the number of new rules on labeling.
We then extended our search to look for rules that included the keywords “assess” and “comprehension.”
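A keyword screen of this kind can be sketched in a few lines of Python. The rule texts below are hypothetical stand-ins; the authors used Adobe Acrobat’s search over the actual Federal Register documents rather than a script:

```python
# Hypothetical mini-corpus standing in for the Federal Register rule texts.
RULES = {
    "rule_a": "Each lender must disclose the annual percentage rate on the label.",
    "rule_b": "This information collection is approved under the Paperwork Reduction Act.",
    "rule_c": "The agency declined to require additional labeling of the product.",
}

# Word fragments are deliberate: "disclos" matches both "disclose" and "disclosure".
KEYWORDS = ["disclos", "label", "information collection",
            "must disclose", "assess", "comprehension"]

def keyword_counts(rules, keywords):
    """Count how many rule texts contain each keyword (case-insensitive substring)."""
    return {
        kw: sum(1 for text in rules.values() if kw.lower() in text.lower())
        for kw in keywords
    }

counts = keyword_counts(RULES, KEYWORDS)
```

As the discussion above notes, a substring match of this sort can both over-count (a keyword in a preamble) and under-count (synonymous language), which is why the counts only bound the number of disclosure rules.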
We identified 16 final rules that appear to be the most likely to have been prepared using quantitative assessments of the effects of mandatory information disclosure on the target audience. [Appendix A provides a list of the 16 rules.] A more detailed review indicated that of these 16 rules, five mandated disclosure of product information that would be helpful to consumers or investors. In addition, one rule issued by the Occupational Safety and Health Administration (OSHA) mandated disclosure of information pertinent to workers’ safety. A second OSHA rule addressed communication of health hazards in the workplace and pointed to a pre-2000 literature that examined the effectiveness of hazard communication. But for this rule OSHA did not conduct any new studies to assess comprehension by workers or consumers of the mandated disclosure relative to alternative forms of disclosure. Only one of these rules, issued by the Board of Governors of the Federal Reserve to regulate consumer finance, contained a quantitative assessment of comprehension.
5 Analysis of the retrospective regulatory review reports of selected federal agencies
President Obama’s E.O. 13563 directs executive branch agencies to conduct retrospective review and analysis of extant federal regulations. We focus on the most recent retrospective regulatory review reports of four federal agencies: EPA, Department of Energy’s Office of Energy Efficiency and Renewable Energy (DOE/EE), and Department of Health and Human Services’ Centers for Medicare and Medicaid Services (HHS/CMS) and Food and Drug Administration (HHS/FDA).
We selected these agencies because their regulatory programs are quite different, they are widely known to have extensive regulatory programs, and they are all executive branch agencies subject to OMB oversight.
In studying the retrospective reports of these four executive agencies – EPA (2013), DOE/EE (2013), HHS/CMS (2013), and HHS/FDA (2013) – we looked for rules with information disclosure requirements for which retrospective review was either ongoing or completed. We then reviewed the “brief description” included in the reports of the nature of the disclosure, the nature of the retrospective review that the agency had undertaken, and whether the agency had undertaken a quantitative assessment of third-party comprehension for any required information disclosure. The retrospective regulatory review reports do not indicate that the agencies conducted any quantitative assessments of comprehension or comparable measures of effectiveness as part of their retrospective regulatory reviews. Further, none of these agencies included a review of an existing information disclosure requirement based on a quantitative analysis of the effectiveness of such a requirement.
6 Case studies
We consider several case studies, which we selected (1) to illustrate the promise of informational remedies to classic problems of informational asymmetry in the markets for goods and services; and (2) to demonstrate the importance of retrospective analyses of the effectiveness of information disclosure. All but one of these case studies fall outside the scope of our systematic search (discussed above) because they either were not designated as major rules or involve rulemakings outside the time period of our systematic search.
The four case studies are (1) EPA and the National Highway Traffic Safety Administration’s (NHTSA) fuel economy labeling rule for cars and light-duty trucks; (2) FTC’s revision of its consumer product labeling rules; (3) the Federal Reserve’s major final rule regarding disclosure of terms and conditions for use of revolving credit unsecured by a home, mostly consumer credit cards (Truth in Lending, 2009); and (4) the Consumer Financial Protection Bureau’s (CFPB, 2013c) final rule governing residential mortgage disclosures.
1. EPA/NHTSA fuel economy labeling for cars and light-duty trucks
Recent reports to Congress from OIRA specifically cited a revision by EPA/NHTSA to fuel economy labels as an example of an effort to study the effectiveness of alternative fuel label designs (OIRA, 2009, p. 37, OIRA, 2011, p. 53).
EPA and NHTSA issued the joint 2011 final rule to implement the Energy Independence and Security Act of 2007. They identified it as the most significant overhaul of the federal government’s fuel economy label since its inception 30 years ago (Revisions and Additions to Motor Vehicle Fuel Economy Label, 2011).
The agencies sought to improve the label to give consumers more information about vehicle performance. They also believed the time was right to develop new labels for advanced-technology vehicles (Revisions and Additions to Motor Vehicle Fuel Economy Label, 2010).
The rule adopted a structured program to inform the development of the revised fuel labels, including the use of three focus groups, a daylong session with an expert panel (drawn from advertising and product development experts), and an Internet survey to evaluate reactions to three alternative label designs. The focus groups and the expert panel session were not designed to provide quantitative results; instead, they were intended to help the agencies understand consumers’ comprehension and preferences in making car-buying decisions (Revisions and Additions to Motor Vehicle Fuel Economy Label, 2011).
The online survey (conducted following the release of the proposed rule) was designed to determine whether any of the proposed label designs had serious flaws that would undermine its effectiveness, particularly with respect to consumers’ abilities to choose fuel-efficient vehicles (Revisions and Additions to Motor Vehicle Fuel Economy Label, 2011). Respondents (self-identified new-car buyers) were shown only one of the three proposed label designs for two different vehicles.
The respondents were asked a series of questions to assess their understanding of information on the label and to detect any differences in vehicle selection. Overall, the results showed only small differences in respondents’ understanding and in vehicle selection with each of the alternative designs and did not find any “fatal flaws” (Revisions and Additions to Motor Vehicle Fuel Economy Label, 2011, p. 39483).
There are some important issues with the EPA/NHTSA evaluation. First, the Internet survey was based on a convenience sample – it was not designed to be representative of any larger group of new-vehicle buyers, and the actual response rate was low, so the responses reflect the experiences only of those who completed the survey.
Second, respondents were asked to respond only to one of the three new alternative label designs. No comparison was made with the existing labels (adopted in 2006), and the Internet study did not include a well-designed “control” for comparison purposes.
As a result, it is not possible to determine whether the revised label for gasoline-fueled vehicles improves or degrades consumers’ comprehension relative to the existing 2006 label. This issue is of some importance because most consumers will continue to buy gasoline-fueled vehicles.
Finally, the study focused on consumers’ selection of electric and other advanced-technology vehicles. Most of the label pairs used in the survey involved a comparison of electric vehicles with various attributes; only a few pairs involved a comparison between a gasoline vehicle and an electric vehicle. There were no pairs comparing labels for two gasoline vehicles with different attributes.
In particular, the study did not address one of the key issues – the frequent misunderstanding by consumers of the miles-per-gallon (mpg) measure of fuel economy (OIRA, 2009).
This particular example highlights the tension between a general concern with consumers’ comprehension of information provided by a label and the agency’s interest in achieving a specific policy objective – in this case, promoting the sale of advanced-technology vehicles.
The agencies acknowledged the central importance of using the best available research to inform judgments about disclosure requirements, stating that they “will continue to consider such research in the future (including, where feasible and appropriate, randomized controlled trials)” (Revisions and Additions to Motor Vehicle Fuel Economy Label, 2011, p. 39483). However, there was no mention in the final rule of the retrospective review requirements of E.O. 13563 or of E.O. 13610, and the final rule did not discuss any plans for a prospective review (or retrospective review) of the effectiveness of the final fuel labels.
2. FTC revision of energy efficiency labels
The Energy Policy Act of 2005 required FTC to consider changes to its consumer product labeling programs to improve their effectiveness for assisting consumers in making purchasing decisions and improving energy efficiency of consumer appliances. In response to this mandate, FTC launched a rulemaking to review and amend its Appliance Labeling Rule. (The FTC completed this rulemaking in 2007 prior to the period covered by our systematic review.) To support the rulemaking, FTC held a workshop and conducted consumer research to evaluate the effectiveness of alternative label designs.
The consumer research study was based on a sample of roughly 4000 respondents drawn from an Internet panel of more than 4 million individuals. The respondents in the sample were recent or likely buyers of major appliances. The preamble acknowledged that the sample for this research was not nationally representative in the “classic sense” but argued that the contractor had taken steps to minimize the differences between samples from its Internet panel and true probability samples. The FTC staff, therefore, believed that the sample fairly represented the population of appliance buyers (Appliance Labeling Rule Notice of Proposed Rulemaking (NPR), 2007).
The study focused on four label designs: the then-current Energy Guide label presenting information on energy use, a revised version of the then-current label, a label featuring operating costs, and a categorical “five-star” label reflecting the energy efficiency of the appliance relative to DOE minimum standards. The study provided a well-defined baseline by evaluating the three alternative label designs with the existing Energy Guide label. The study randomly assigned respondents to different label design groups. Each respondent was shown only a single label design and asked questions designed to determine its effectiveness in conveying information on operating costs and energy use and efficiency. Respondents were also asked questions about product quality to determine whether there were differences across the label designs in information conveyed on product quality.
The final rule requires that labels for major appliances provide operating cost as the primary disclosure, with energy use data – formerly the primary disclosure – appearing as secondary information. The FTC reported,
“The research suggests that the operating cost disclosure provides a clear, understandable tool to allow consumers to compare the energy performance of different models. The operating cost design not only performed well on objective tasks in the research but research participants identified the design as the most useful method for communicating energy information …” (Appliance Labeling Final Rule, 2007b, p. 49953)
Although FTC recognized the potential value of a categorical label design, the consumer research study found that buyers might misinterpret the categorical five-star label. Some respondents interpreted the stars as representing a broader quality rating, and some believed that the stars indicated that the product participated in EPA’s Energy Star program; in both cases the rate of misinterpretation was statistically significantly higher than with the alternative label designs (Appliance Labeling Rule NPR, 2007). As a result, FTC did not adopt the categorical five-star label, even though it received strong support from several important consumer and constituent groups.
3. Consumer finance
In January 2009, the Federal Reserve Board of Governors (“the Board”) issued a Truth in Lending final rule, which is a major final rule requiring disclosure of terms and conditions for use of revolving credit unsecured by a home, mostly consumer credit cards. This final rule used both qualitative and quantitative consumer testing. We identified this rule as part of our systematic review. The rule stated, “The goal of quantitative testing was to measure consumers’ comprehension and the usability of the newly-developed disclosures relative to existing disclosures and formats” (Truth in Lending, 2009, p. 5248).
The quantitative consumer testing consisted of mall-intercept interviews of a total of 1022 participants in seven cities (Truth in Lending, 2009). In each interview the questioner showed the participant models of the summary table provided in direct-mail credit card solicitations and applications and of the periodic statement (Truth in Lending, 2009). The questioner then asked a series of questions designed to assess the effectiveness of certain formatting and content requirements proposed by the Board or suggested by commenters (Truth in Lending, 2009). These questions led to 13 quantitative findings about the relative superiority of different types of disclosure (Truth in Lending, 2009). For example, the fourth finding was that study participants who saw forms that “listed the over-the-credit limit fee on the back of the page [as opposed to the front page] were statistically less likely to be able to identify this fee (40.6% vs. 68.7%)” (MacroInternational, 2008, p. iii).
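The significance of a comparison like this one can be checked with a standard two-proportion z-test. The Python sketch below illustrates the kind of calculation behind the 40.6% vs. 68.7% finding; the per-group sample sizes (500 each) are assumptions for illustration only, since the study’s actual group sizes are not restated here.

```python
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Return the z statistic and two-sided p-value for H0: p_a == p_b,
    using the pooled-proportion standard error and a normal approximation."""
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail
    return z, p_value

# Assumed counts for illustration: 203/500 identified the fee when it was
# listed on the back page (40.6%), 344/500 when it was on the front page
# (about 68.7%). These group sizes are hypothetical.
z, p = two_proportion_z(203, 500, 344, 500)
print(f"z = {z:.2f}, p = {p:.2g}")
```

With groups of this size, the gap is far beyond the conventional 5% significance threshold, consistent with the Board’s reported finding.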
The Board then used the quantitative testing results in a variety of ways in its rulemaking. One example, pertaining to payment allocation, illustrates that quantitative testing can play a critical, even decisive, role. “Payment allocation” refers to the methods that banks use to allocate consumers’ payments among different balances, carrying different interest rates, within the same consumer account. The testing associated with the final rule examined whether consumers’ understanding of payment allocation practices could be improved through disclosure (Truth in Lending, 2009). The results showed that disclosure, even of the relatively simple practice of applying payments to lower-interest balances before higher-interest balances, improved understanding for only a very few consumers. The disclosure also confused some consumers who, based on prior knowledge, had understood payment allocation before reviewing the disclosure. Based on these results, quantitative testing led the Board to reject this specific part of its information disclosure, because of inadequate consumer comprehension, in favor of a substantive regulatory approach.
A second example involved the effective annual percentage rate (APR), a summary measure of the average cost of borrowing that reflects both interest and other finance charges, such as cash advance fees or balance transfer fees imposed during the billing cycle. Testing under the final rule overwhelmingly showed that few consumers understood the disclosure and that some consumers were less able to locate the interest rate applicable to cash advances when the effective APR was also disclosed on the periodic statement. Accordingly, the final rule also eliminated the requirement to disclose an effective APR for open-end (not home-secured) credit.
Because it was based on a convenience sample in shopping malls in seven cities, this analysis is not necessarily representative of borrowers and has unknown sampling error.
4. Residential mortgage lending
Decisions to buy and finance residential real estate are some of the most important transactions that American families make – a fact illustrated vividly by the human toll from the 2008–2009 crash in real estate prices. Thus, it is not surprising that regulatory agencies have conducted consumer testing of some of the federally mandated information disclosures. The CFPB, created by the Dodd–Frank Wall Street Reform and Consumer Protection Act of 2010 (“Dodd–Frank”), is well positioned to be a leader in the use of quantitative studies of consumer comprehension. Dodd–Frank’s primary charge to CFPB is “[e]nsuring that consumers have timely and understandable information to make responsible decisions about financial transactions” (CFPB, 2013a, p. 7).
CFPB took a significant step toward data-driven consumer protection in developing the final rule governing residential mortgage disclosures (CFPB, 2013c). The CFPB completed this rulemaking in late 2013, after the period covered by our systematic review. In developing this rule, CFPB used extensive qualitative and quantitative testing of its new Loan Estimate Form and the Closing Disclosure Form, relative to alternative forms, including those already in use. Importantly, in its quantitative testing, the CFPB randomized the disclosure type, the loan type, and the “initial” loans that the respondents saw (CFPB, 2013b; Kleimann Communication Group, 2013). In addition, CFPB studied subjects recruited from nationally representative random samples. Further, the study design compared comprehension of the new and old forms.
CFPB reported significant gains in comprehension with the new forms. For example, the new forms outperformed the current forms, with a 49% improvement in understanding the amount borrowed and a 37% improvement in understanding the total monthly payment (CFPB, 2013b, Table 8.1).
In the rare instances where agencies have conducted high-quality assessments of comprehension of mandated information disclosures, these analyses have also provided useful information. For example, Lacko and Pappalardo (2010) pointed the way to revising the confusing disclosure forms that lenders were required to present to mortgage borrowers, and the CFPB study in 2013 showed that the revised forms yielded significantly better comprehension.
OIRA guidance points to several elements that might comprise “best practice” in evaluating the effectiveness of an information disclosure, including whether the respondents “…understand the disclosure, [and] whether they remember the relevant information when they need it …” (OIRA, 2010a, p. 101). In addition, the guidelines state that “Scientifically valid experiments are generally preferable to focus group testing, and randomized experiments can be especially valuable” (OIRA, 2010a, p. 101).
Earlier OIRA (2006) guidance also offered direction on the structure of a scientifically valid experiment. First, a valid study needs a representative sample of program participants to accurately describe the population and to generalize the results. For example, comprehension may differ significantly across groups in the population, such as borrowers willing and able to secure a conventional mortgage versus those who consider balloon payments and variable interest rate mortgages. Thus, refined quantitative testing of consumer comprehension may need to consider the effects of alternative disclosures on different populations to best address problems of imperfect information. In addition, OIRA (2006) cautioned against the use of nonprobability samples, such as convenience samples and Internet panels, for federal surveys.
For these nonprobability samples, the results cannot be generalized to any target population using traditional statistical criteria. Finally, OIRA (2006) also pointed to the response rate as a key indicator of the quality of the survey and the extent to which the survey results can be generalized to the target population.
Critics have argued that OIRA’s implementation of the Paperwork Reduction Act represents a barrier to a variety of scientific studies, including studies evaluating the effectiveness of mandated information disclosures.
In response to this criticism, OIRA (2010a) has moved to streamline its review process by establishing a generic clearance process for specific types of information collections. In a memo titled “Facilitating Scientific Research by Streamlining the Paperwork Reduction Act Process,” OIRA (2010d) outlined options and strategies that agencies can use to streamline the process of obtaining OMB approval of research-related information collections. OIRA could establish a similar streamlined review process for agency studies evaluating mandated information disclosures that adopt the best practices identified below.
Based on this OMB guidance, Lacko and Pappalardo (2010), and our review of some recent agency studies about information disclosures, we have identified the following as most appropriate practices for the evaluation of agency-mandated information disclosures with a focus on consumers’ comprehension. Such evaluations should
∙ elicit information on comprehension;
∙ be based on a representative sample (not a convenience sample) and yield an acceptably high response rate;
∙ use random assignment;
∙ adopt a well-defined baseline or appropriate controls or comparisons;
∙ assess reasonable alternative disclosures to evaluate their effectiveness; and
∙ use appropriate statistical tests in assessing the effectiveness of alternative information disclosures.
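The random-assignment element of these practices can be sketched in a few lines of Python. The respondent counts, IDs, and disclosure design names below are hypothetical; a real evaluation would draw a probability sample of actual respondents rather than generate identifiers.

```python
import random

random.seed(42)  # fixed seed so the assignment is reproducible

# Hypothetical sampled respondents and candidate disclosure designs.
respondents = [f"R{i:04d}" for i in range(600)]
designs = ["current_form", "alternative_A", "alternative_B"]

# Random assignment: shuffle the sample, then deal respondents evenly
# across the designs so each arm sees one disclosure variant.
random.shuffle(respondents)
assignment = {d: respondents[i::len(designs)] for i, d in enumerate(designs)}

for design, group in assignment.items():
    print(design, len(group))
```

Comprehension scores for each arm could then be compared against the baseline (“current_form”) arm with an appropriate statistical test, per the last element in the list above.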
Lacko and Pappalardo (2010), the FTC study associated with its Appliance Labeling Final Rule (2007b), and the CFPB residential mortgage study are largely consistent with this guidance. Each provides a randomized controlled experiment to evaluate respondents’ understanding of several alternative forms of information disclosure.
Our review, however, finds only a very few cases where agencies have undertaken a quantitative assessment that largely conforms with the best practices identified above, notwithstanding presidential and OMB directives that they conduct high-quality assessments.
Given the emphasis that this administration has placed on such assessments, it is surprising that they are so scarce, particularly because the benefit-cost analyses agencies routinely prepare typically require more time and money than the rigorous quantitative assessments of comprehension discussed here. From the standpoint of senior executives at regulatory agencies, however, conducting such assessments offers little advantage. First, it requires real resources – perhaps $100,000 or more in contractor support, and substantial time from specialized senior analysts and managers with the skills to oversee such work. These costs are small relative to the potential social benefits, but they can discourage managers from undertaking what are essentially discretionary projects during times of tight budgets. Second, these studies may take six months or more to complete and thus may delay the regulatory decisions that political appointees wish to make during the two to three years they expect to lead their agencies. Third, a rigorous, quantitative assessment of comprehension does not necessarily point toward more or less stringent regulation and thus is not naturally embraced by political appointees with incentives to pursue one or the other.
It is worth noting that the lack of high-quality analysis of the effectiveness of mandated information disclosures is distinguishable from a broader problem of flaws in federal regulatory analysis highlighted by, e.g., Hahn and Tetlock (2008), Ellig and Morrall (2010), Ellig, McLaughlin, and Morrall III (2013), and Fraas and Lutter (2011).
We offer three types of recommendations to remedy the lack of credible evidence regarding the effectiveness of federally mandated information disclosures: (1) additional analyses of consumers’ comprehension in rulemakings where more analysis would be consistent with recent executive orders and OMB directives; (2) improved management and reporting by OMB of analyses of consumers’ comprehension of mandated disclosures; and (3) legislative action to address the apparent torpor of some regulators regarding such analyses. As a practical matter, however, reform need not require legislative action: OMB already retains authority under various E.O.s (e.g., E.O. 12866 and E.O. 13563), as well as the Paperwork Reduction Act, to require that regulatory agencies conduct the analysis that OIRA has recommended.
(1) As articulated in executive orders and OMB directives, major ongoing federal rulemakings mandating information disclosure or prohibiting certain high-risk products merit a full and fair evaluation of the effectiveness of mandatory disclosure, either as a policy or as an alternative to a product ban.
(2) We recommend that OMB consolidate and formalize its directives on information disclosure by issuing specific guidelines on best practices, along the lines outlined above, to evaluate the effectiveness of mandated information disclosures. It should include in this directive a streamlined process of Paperwork Reduction Act review for studies that incorporate such best practices. In addition, OIRA in its review of agency rules under E.O. 13563 should ensure that agencies complete evaluations of the effectiveness of mandated information disclosures as a precondition for issuing regulatory proposals. Perhaps most importantly, OIRA, in its annual report to Congress on the benefits and costs of federal regulations, should provide additional information about significant federally mandated information disclosures. Specifically, it should identify rules mandating disclosures that are supported by analyses that comply with the standards articulated above, as well as rules where the available analyses fall short of such standards.
(3) Finally, given FTC’s leadership in this area, we recommend that Congress increase appropriations for FTC’s consumer research, with a specific call for additional work from FTC about the effectiveness of a designated list of extant and new federally mandated information disclosures.