
The elephant's other legs: What some sciences actually do

Published online by Cambridge University Press:  05 February 2024

Jonathan Baron*
Affiliation:
Department of Psychology, University of Pennsylvania, Philadelphia, PA, USA jonathanbaron7@gmail.com https://www.sas.upenn.edu/baron
*Corresponding author.

Abstract

Integrative experiments, as described, seem blindly empirical, as if the question of generality of effects could not be understood through controlled one-at-a-time experiments. But current research using such experiments, especially applied research, can resolve issues and make progress through understanding of cause–effect pathways, leaving to engineers the task of translating this understanding into practice.

Type
Open Peer Commentary
Copyright
Copyright © The Author(s), 2024. Published by Cambridge University Press

Almaatouq et al. claim that the sciences of interest are about theory development. I'm not sure what “theory” means to them. Possibly the term refers to something like a prediction that X will affect Y. The term could also refer to a causal explanation, for example, “people tend to favor harms of omission over harms of action because they think of omission as default and use a heuristic of favoring default” or “because they attend more to actions.” The explanations are usually chains of events, some of which are mental. Questions about causal explanations are often addressed in the one-at-a-time paradigm, with careful controls, and some of these questions are answered (e.g., Baron & Ritov, 1994).

The one-at-a-time paradigm largely concerns the existence of effects, not their generality (Baron, 2010). Often the subject population of interest is “human beings,” and, I hope, most of them are not yet born. In such cases, all samples are convenience samples. We can do small tests of generality of causal effects. Usually they generalize pretty well, in direction if not in magnitude. Once we know that a causal process exists, we can ask further questions about it to understand it better. Anything we learn through these experiments will probably apply to the process when it exists. But existence, of effects and causal chains, is all we can ever learn from experiments, including the integrative experiments proposed.

Some of the integrative experiments discussed pit causal effects against each other. Because the magnitude of effects depends on all sorts of things, it is not clear that we can conclude with much generality which one wins. Indeed, group synergy might work one way in some situations and the opposite way in others, but it is not clear that the sample space is sufficient to test all possibilities, and, moreover, the mere finding that effects are present in some part of the space does not tell us why. The results seem blindly empirical. By contrast, the one-at-a-time approach, when properly applied, can increase our understanding of how things work.

Even when causal effects do not compete, estimation of their relative magnitude will depend on the space of possibilities (as well as who the subjects are, as the article notes). For example, moral judgments about autonomous vehicles may yield quite different results from moral judgments about bioethics.

A different sense of the term “theory” refers to explanations that tie together diverse phenomena that might at first seem to be unrelated. Freud's theory of unconscious motivation (still at work behind the scenes in social psychology, despite its disappearance from most textbooks) was an example. Other theories are more limited in what they explain, such as the idea that many errors result from substitution of judgments of one attribute for judgments about another, which is usually correlated with the first (Kahneman & Frederick, 2002; also Baron, 1973). In some of these theories, the claim is that something happens in many cases, but we do not know which ones. It is something to look for. Similarly for the “germ theory of disease,” which says that, in trying to find out the cause of a disease, it is a good idea to look for very small organisms. In such cases, integrative design competes with the alternative approach of exploring more examples, such as diseases caused by toxicity or genetic abnormalities. Brute-force empiricism would be unlikely to discover or explain such cases.

In applied sciences, such as medicine, broad theory can help, as with the “germ theory of disease,” but this sort of theory, for the most part, is neither absolute nor completely general, unlike what physics tries to do. Most medical research is about the etiology and treatment of particular disorders, one at a time, although sometimes a discovery can apply to several similar disorders.

Examples abound in psychology of increased understanding that results from analysis of particular applied problems. A great deal of modern social psychology arose historically out of attempts to understand the rise of fascism. Some of the cognitive psychology of attention and vigilance arose from the study of radar operators in World War II (Garner, 1972). Recent research on judgment arose out of attempts to measure the nonmarket value of the harm caused by the Exxon Valdez oil spill (Kahneman, Ritov, Jacowitz, & Grant, 1993). Research on forecasting was spurred by attempts to understand the failures of intelligence agencies (Dhami, Mandel, Mellers, & Tetlock, 2015). Research on risk perception was provoked by perceived over- and under-regulation of risk (Breyer, 1993; Slovic, 1987). In these cases and many others, we have learned a lot. Sometimes institutions have even changed their decision-making procedures in response to what we have learned.

Applied research in medicine and psychology often involves experimental understanding of phenomena such as disorders or biases. Such understanding informs the efforts of engineers (in the broad sense that includes designers of administrative procedures, decision procedures, systems of psychotherapy, and human–machine interfaces). Engineers try to get things to work by a cycle of build–test–build–test and so forth. The practice of decision analysis, for example, has built on laboratory results such as those concerning the difficulty of assigning weights to attributes (von Winterfeldt & Edwards, 1986). Similar relations between basic one-at-a-time research and application are found in the work on “nudges” (Thaler & Sunstein, 2008), cognitive behavior therapy (Beck, 1979), forecasting (Dhami et al., 2015), and literacy (Treiman, 1992). Often, as in the last two cases, ultimate applications run into political or institutional resistance.

This sort of research is based not on data alone but also on an understanding of what kinds of causal links are plausible. Such understanding often comes from background knowledge drawn from a variety of fields, including (for psychology) philosophy, linguistics, computer science, biology, and politics. Understanding of a phenomenon comes neither from blindly empirical research nor from careful controlled experiments uninformed by background knowledge.

Competing interest

None.

References

Baron, J. (1973). Semantic components and conceptual development. Cognition, 2, 189–207.
Baron, J. (2010). Looking at individual subjects in research on judgment and decision making (or anything). Acta Psychologica Sinica, 42, 1–11.
Baron, J., & Ritov, I. (1994). Reference points and omission bias. Organizational Behavior and Human Decision Processes, 59, 475–498.
Beck, A. T. (Ed.). (1979). Cognitive therapy of depression. Guilford Press.
Breyer, S. (1993). Breaking the vicious circle: Toward effective risk regulation. Harvard University Press.
Dhami, M. K., Mandel, D. R., Mellers, B. A., & Tetlock, P. E. (2015). Improving intelligence analysis with decision science. Perspectives on Psychological Science, 10(6), 753–757.
Garner, W. R. (1972). The acquisition and application of knowledge: A symbiotic relation. American Psychologist, 27(10), 941–946.
Kahneman, D., & Frederick, S. (2002). Representativeness revisited: Attribute substitution in intuitive judgment. In Gilovich, T., Griffin, D., & Kahneman, D. (Eds.), Heuristics and biases: The psychology of intuitive judgment (pp. 49–81). Cambridge University Press.
Kahneman, D., Ritov, I., Jacowitz, K. E., & Grant, P. (1993). Stated willingness to pay for public goods: A psychological perspective. Psychological Science, 4(5), 310–315.
Slovic, P. (1987). Perception of risk. Science, 236, 280–285.
Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving decisions about health, wealth, and happiness. Yale University Press.
Treiman, R. (1992). Beginning to spell: A study of first-grade children. Oxford University Press.
von Winterfeldt, D., & Edwards, W. (1986). Decision analysis and behavioral research. Cambridge University Press.