Studying Disputes: Learning from the CLRP Experience

Abstract

This paper describes the data collection strategy of the Civil Litigation Research Project. It discusses many of the practical problems of choosing and implementing the research design and assesses the results of the data collection effort.

Type
Part Two-The Civil Litigation Research Project: A Dispute-Focused Approach
Copyright
Copyright © 1981 The Law and Society Association.

Footnotes

*

An earlier version of this paper was presented at meetings of the Law and Society Association, Madison, Wisconsin, June, 1980. The work described in this paper could not have occurred without the extremely valuable efforts of Mathematica Policy Research, our survey subcontractor. We would particularly like to thank Lois Blanchard, Joey Cerf, Paul Planchon, and John Hall for the many long hours they put in on our study in their efforts to make it a success.

References

1 The Civil Litigation Research Project was initiated through a Request for Proposal (RFP) issued by the Office for Improvements in the Administration of Justice. That original RFP called on the researcher to use a mixed design of the type described above. The final study design differed from the vision of the original RFP; this section discusses many of the considerations that led to the design modifications.

2 This approach can create one technical problem if more than one participant in a particular dispute is interviewed: respondents are not selected independently of one another. Most statistical procedures assume "independent random sampling," and this assumption is violated if "respondent" is used as the unit of statistical analysis. The problem must be considered on an analysis-by-analysis basis, since it may not arise in many specific analyses; where it does arise, the technically correct solution is to randomly select one respondent from each case with multiple respondents.
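
In modern terms, the fix is simply to group respondents by case and retain one at random. A minimal sketch in Python, with hypothetical respondent records and field names, follows:

    import random

    # Hypothetical respondent records; field names are illustrative only.
    respondents = [
        {"case_id": "A-101", "respondent_id": 1},
        {"case_id": "A-101", "respondent_id": 2},
        {"case_id": "B-202", "respondent_id": 3},
    ]

    def one_per_case(records, seed=0):
        """Randomly keep a single respondent per case, restoring independence."""
        rng = random.Random(seed)
        by_case = {}
        for rec in records:
            by_case.setdefault(rec["case_id"], []).append(rec)
        return [rng.choice(group) for group in by_case.values()]

    analysis_units = one_per_case(respondents)  # one record per dispute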

3 One technical problem created by this sample design is that it constitutes a "multi-frame sample"; that is, different sampling frames are used to obtain various subsets of the overall sample. As long as analyses are confined to comparisons across strata and to comparisons within strata, the multi-frame sample does not raise problems. However, if analyses involve collapsing across strata, and particularly if the relationships of interest vary across strata (e.g., an analysis of discrimination cases across all "institutions"), then the multi-frame sample may produce misleading results. To combine the various samples, it is technically necessary to weight the observations from each sample to correspond with the relative frequency of responses in the various strata, which requires information on that relative frequency. We sought to obtain some crude information that could be used for weighting, though to date we have not implemented any weighting scheme for the main survey. This remains a weakness of the survey design we used.
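
The weighting arithmetic is straightforward once the stratum frequencies are known; the sketch below illustrates it with purely hypothetical stratum totals (as noted above, no weighting scheme was actually implemented for the main survey):

    # Hypothetical stratum totals; all figures here are purely illustrative.
    strata = {
        "federal_court": {"population": 12000, "sampled": 400},
        "state_court":   {"population": 30000, "sampled": 500},
        "alternatives":  {"population": 5000,  "sampled": 300},
    }

    # Weighting each observation by N_h / n_h lets a sample collapsed across
    # strata reproduce the relative frequencies of the combined population.
    weights = {name: s["population"] / s["sampled"] for name, s in strata.items()}
    for name, w in weights.items():
        print(f"{name}: weight {w:.1f} per observation")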

4 As the term “middle range” suggests, we also excluded extremely large cases, defined as those cases with a court record of sufficient bulk or complexity to be beyond our resources to code. Overall, 37 cases were excluded from our sample as too large.

5 At one point we were planning to do a small-scale panel study to supplement our larger retrospective study; this panel study was ultimately dropped for budgetary reasons.

6 Our field staff developed a decision tree that was easily applied to each individual case. Before releasing a case for interviewing, we did additional screening using the coded material. An additional 10 percent of the cases were excluded for a variety of reasons, such as indications that the case had probably been refiled in another court or that the case in our sample was a small part of a much larger case in another court.
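
Such a screening filter might be sketched as follows; the record flags are hypothetical stand-ins for the coded material, and the actual decision tree applied more criteria than the two exclusion reasons mentioned above:

    # Hypothetical flags standing in for the coded court-record material.
    def release_for_interviewing(record):
        """Screen a coded case before releasing it to the interviewing pool."""
        if record.get("probably_refiled_elsewhere"):
            return False  # likely refiled in another court
        if record.get("fragment_of_larger_case"):
            return False  # small part of a much larger case in another court
        return True

    cases = [
        {"id": 1, "probably_refiled_elsewhere": True},
        {"id": 2, "fragment_of_larger_case": False},
    ]
    released = [c for c in cases if release_for_interviewing(c)]  # case 2 only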

7 A more detailed discussion of the rationale for each of these categories can be found in the survey clearance documents that we prepared for the Office of Management and Budget. These documents are available from the author.

8 The answer to the first question was dictated by the terms of our contract.

9 We eliminated from consideration districts that lacked a major law school in the city where terminated federal case records were maintained, since we needed to recruit law students as case coders.

10 We defined alternatives as “institutions or facilities that provide dispute processing services, including hearings, other than as a required step in litigation that has already been initiated.”

Institutions or facilities is meant to include the American Arbitration Association, industry-organized arbitration, marriage counseling services, government administrative agencies, trade associations' consumer action panels, union review boards, and similar bodies that regularly provide dispute processing services. It is meant to exclude ad hoc mediation and arbitration. Ad hoc services are excluded because they are not, from a reform perspective, feasible alternatives to litigation: they cannot easily be provided or fostered by government.

Services including hearings is meant to exclude intermediaries such as officeholders, media action lines, and those government agencies that do not provide the opportunity for disputants to hear each other's arguments directly. These intermediaries were excluded because, given the limits of our research, it made sense to explore alternatives that employ due process approximately equivalent to that used by courts.

Other than already initiated litigation includes services that may terminate disputes, even though they may not be a complete substitute for litigation (e.g., administrative hearings).

Other than a required step includes cases in court where the disputants volunteer to use an in-court alternative (e.g., arbitration), but excludes that same service where it is involuntary. In the latter case the service is viewed as a step in litigation rather than an alternative to it.

The specific institutions sampled, in addition to the American Arbitration Association, were the Equal Rights Division of the Wisconsin Department of Industry, Labor and Human Relations, the Green Bay Zoning Board of Appeals, the Green Bay Planning Commission, the Philadelphia Board of View, the Philadelphia Commission on Human Relations, the Occupational Safety and Health Division of the South Carolina Department of Labor, the County Court Arbitration Program administered under the South Carolina Automobile Reparation Reform Act, the Construction Industries Division of the Commerce and Industry Department of New Mexico, the Employment Services Division of the Human Services Department of New Mexico, the Better Business Bureau of Los Angeles and Orange Counties, and the Contractors' State Licensing Board of the California Department of Consumer Affairs.

11 The specifics of the method were as follows. In each of the institutions where we encountered this problem, cases were entered into the institution's record-keeping system (i.e., the docket books) by date of filing; our goal, on the other hand, was to obtain a sample of cases terminated during 1978 (even though the cases may have been filed five or more years before 1978). The sample of terminated cases had to be drawn so that the proportion of sampled cases filed in each year, working backward from 1978, approximated the corresponding proportion in the population of cases terminated in 1978; however, the courts that had no lists of cases terminated in 1978 also had no information on what we came to call the "aging profile" (i.e., the proportion of terminated cases filed in each prior year). To draw samples in these institutions, we first had to construct aging profiles. We did this by taking a sample of five to eight docket books for the years 1970 through 1978 (the sample of docket books for each year contained about 2,000 cases) and counting the number of cases listed in each docket book that were terminated in 1978. From this information we created an estimated aging profile, which we then used to construct a cluster sample in the following way. We counted the number of docket books covering each year we had decided to include on the basis of the aging profile (we omitted years with virtually no cases terminated in 1978, typically 1970 through 1972 or 1973) and identified the starting and ending docket numbers in each book. We then randomly selected a docket book, with the probability of selection based upon the aging profile. For each docket book sampled, we randomly generated a cluster of five case numbers from the book; these five numbers served as random start points for a search for cases terminated in 1978.
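
In present-day terms the procedure might be sketched as follows; the termination counts, docket books, and docket-number ranges are all hypothetical:

    import random

    rng = random.Random(42)

    # Hypothetical counts of 1978 terminations found in the probe of docket
    # books, by year of filing; all numbers are illustrative.
    terminated_1978 = {1974: 40, 1975: 120, 1976: 310, 1977: 520, 1978: 390}
    total = sum(terminated_1978.values())
    aging_profile = {yr: n / total for yr, n in terminated_1978.items()}

    # Hypothetical docket books: (book id, first and last docket numbers).
    docket_books = {
        1976: [("1976-A", 1, 2000)],
        1977: [("1977-A", 1, 2000), ("1977-B", 2001, 4000)],
        1978: [("1978-A", 1, 2000)],
    }

    def draw_cluster(cluster_size=5):
        """Pick a docket book with probability driven by the aging profile,
        then draw a cluster of docket numbers as search start points."""
        years = list(docket_books)
        year = rng.choices(years, weights=[aging_profile[yr] for yr in years], k=1)[0]
        book_id, first, last = rng.choice(docket_books[year])
        return book_id, sorted(rng.sample(range(first, last + 1), cluster_size))

    book, start_points = draw_cluster()

Each draw returns one docket book and five start points; repeating the draw yields the full cluster sample.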

12 The one problem that we know occurred with this approach was an under-representation of "old" cases in Los Angeles. For some reason, our search points generally failed to turn up cases filed in 1973 or 1974. In retrospect, we probably should have formally stratified the sample by year filed and then generated sufficient start points for each year's stratum.

13 We devised a set of rules to choose among eligible disputes for households reporting two or more disputes.

14 This technique has the property of weighting the probability of inclusion in proportion to the number of telephone lines going into an organization; in effect, larger organizations were more likely to be included in the sample than were small, one-line organizations. We saw this as a desirable property for a sampling technique applied to organizations. To minimize the cost of the random-digit dialing operation, we drew upon phone numbers that we had identified as likely business numbers during the household screening survey, supplemented by other phone numbers identified as business phones during the random-digit dial surveys in our geographic areas.
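
The property is easy to see in a small simulation; the organizations and line counts below are hypothetical:

    import random

    rng = random.Random(7)

    # Hypothetical organizations and the number of telephone lines each has.
    org_lines = {"one_line_shop": 1, "mid_size_firm": 5, "large_company": 40}

    # Random-digit dialing reaches every line with equal probability, so an
    # organization's chance of entering the sample is proportional to its lines.
    all_lines = [(org, i) for org, n in org_lines.items() for i in range(n)]
    sampled_org, _ = rng.choice(all_lines)  # "large_company" turns up most often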

15 Because of the nature of organizational responsibilities and modern telephone systems, initial calls often produced referrals to someone at a different phone number; we accepted this as part of the process of dealing with organizations. In cases where we were referred to offices outside the area that we had initially called, we asked our eventual respondent to identify a dispute in the original geographic area.

16 One reason for this was that in the early stages of the study we thought it would be necessary to employ lawyers as interviewers in order to secure the cooperation of lawyer respondents. Pretests (and later experience) demonstrated that this was not necessary.

17 There were a variety of ambiguous situations where two of the survey instruments could be used (e.g., should an inside lawyer be interviewed with the lawyer instrument or the organizational instrument?). We adopted a set of rules to resolve these ambiguities.

18 Another operational question was whether to conduct the interviews in person or by telephone. Cost considerations in the end dictated telephone interviews. We were warned by several survey experts that we would have tremendous difficulty with the telephone medium given the length of our interviews. As best we can tell, the medium did not generate these problems: we had few broken-off interviews, and a refusal rate within the range we had expected.

19 The numbers of screening interviews conducted were 1508 and 5148 for the organizational and household surveys, respectively.

20 In order to control costs, we organized the surveys to release batches of cases into the interviewing pool. In the end, we found that we were exhausting our survey budget and decided not to release the last batch of cases.

21 For reasons of expertise and practicality, we subcontracted the major survey operation, including much of the instrument preparation and pretesting, to an experienced survey organization, Mathematica Policy Research, of Princeton, N.J. The only survey that we actually conducted “in house” was the organizational screening survey.

22 One extremely interesting question for the sociology of law is the relationship between disputants and their lawyers; we estimate that included within our data set are about 600 lawyer-client pairs; 370 involve “long” interviews for both the disputant and the lawyer.

23 As is always the case, we realize after looking at our data that we failed to ask some questions we should have asked, and that we should have asked some questions in a different way.

24 We used two techniques for identifying organizational spokespersons (or, in Project jargon, K.O.D.s, key organization decisionmakers): simply calling into an organization and asking whom one would talk to about a particular problem, and asking outside lawyers for the name of their contact person at the organization.

25 The “sitting in the village square” technique can be used to study microcosms of the large urban environment (see Felstiner and Williams, 1978; Buckle and Thomas-Buckle, 1981), but such studies look only at a very limited (though perhaps frequent) aspect of the disputing universe.

For references cited in this article, see p. 883.