N of 1: In Defense of the Particular

The N of 1

Case study research and writing are often considered anecdotal, mere storytelling. Rarely are they considered equal to Big Data: N of 1,000,000. And we’re even less likely to consider the case study as having explanatory power (Longhofer, Floersch, & Hartmann, 2017; Flyvbjerg, 2001; Steinmetz, 2004). At the same time, the case study is widely used in business, medicine, nursing, psychology, sociology, anthropology, psychoanalysis, and history as an explanatory and descriptive tool.

We turn to big data imagining that we can predict behavior (see our post on science and prediction in open systems). Amazon monitors your purchases to produce algorithms on consumer habits and then sells advertisements linked to purchase history. Video game companies develop models and algorithms designed to keep children (and adults) in the game. Big Data and its associated technologies have led to the end of privacy (Miller, 2019).

Please take time to listen to two very interesting podcast episodes from Malcolm Gladwell’s Revisionist History: “The Standard Case” and “Descend into the Particular.”

The mathematician Cathy O’Neil, in her book Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, writes about our turn toward Big Data:

The math-powered applications powering the data economy were based on choices made by fallible human beings. Some of these choices were no doubt made with the best intentions. Nevertheless, many of these models encoded human prejudice, misunderstanding, and bias into the software systems that increasingly managed our lives. Like gods, these mathematical models were opaque, their workings invisible to all but the highest priests in their domains: mathematicians and computer scientists. Their verdicts, even when wrong or harmful, were beyond dispute or appeal. And they tended to punish the poor and the oppressed in our society, while making the rich richer (2016, p. 3).

[Image: a bell jar, illustrating a closed system]

Many researchers dream of the perfect dataset, one with few missing variables and many data points on client use of social services. With this imagined perfect dataset, the hope is to predict (see our post on a Caring Science and prediction) the outcomes of interventions and show how a policy or practice worked (Cartwright & Hardie, 2012). Rarely do these same researchers ask: worked for whom? It is important to understand here that much of social work research, though certainly not all, is driven by a pragmatist epistemology and ontology. Pragmatism insists: what is, is what works. Here’s the problem: what works is always in someone’s or some institutional or class interest. Take, for example, the research (especially for treatment) on the novel coronavirus. Big Pharma companies are all scrambling to see who can get there first and profit. In the video below you see Congresswoman Katie Porter grill a Big Pharma executive about his avarice and his company’s greed (we’ll no doubt see the same for COVID-19 drugs).

[Video: Congresswoman Katie Porter questions a pharmaceutical executive]
These researchers rarely identify the mechanisms producing the behavior or effects, nor do they provide counterfactuals; that is, they do not show how different mechanisms and structures produce different or similar effects in different social or historical contexts (see Nancy Cartwright’s work on this subject). Moreover, we are led to believe that big data produces certainty. In other words, if something has happened over and over again, in millions of cases, then of course the 1,000,001st case will be the same or similar. This is the logical fallacy post hoc ergo propter hoc. Clearly Big Data can be used to produce meaningful results in the natural and social sciences. What matters is how it is deployed in the human sciences. Is it deployed to answer the question: what matters to us? Is it deployed to explain how things work so that we can understand how to change unwanted determinations (e.g., how racism is produced and reproduced)? Most big data findings rest on a false belief: that we live in closed systems where variables can be controlled and time stopped, as if we live in a glass jar.

Epidemiology has used large datasets to identify how viruses spread in groups, and it has also explained single cases with real explanatory power (e.g., cholera). During the pandemic of 2020, we’ve learned yet again about the limitations of these methods: they cannot predict whether actors in open-system social environments will choose to wear a mask, whether they will abide by public health recommendations to maintain social distance, or how particular bodies will respond to the virus or its treatment. We again return to the case study. Big Data was not useful in helping us see how politics trumped science in the open system.

Indeed, case studies teach us, again and again, that there are no universally effective interventions; there is practice, and the practice of theorizing about practice. As self-reflective practitioners, case studies help us focus on a problem, question, or paradox that has arisen in our interventions: why do some wear masks and others not? In using and teaching the case study method, we have learned that there is not one essential story of success or organizational outcome, where each practitioner uses the same method to lead clients to inevitably better places. In place of a uniform method, theory, or single narrative arc, there are many and sometimes competing methods and theories (Willemsen et al., 2015) and a range of outcomes: some clear successes, to be sure, but most ending in a more ambiguous place, with the client still very much in the process of learning how to contend with life’s full array of challenges and difficulties. (For those interested in clinical case studies and mental health, see the Single Case Study Archive.)

Moreover, we have come to recognize the necessary role humility plays in caregiving. Absent a single, totalizing theory or methodology that can account for all human behavior, cognition, and emotion, caregivers are left with an inescapable fact: expertise doesn’t mean always being right; it means always being open to the possibility that during any given client intervention some important detail can go unnoticed, some vital question can go unasked, some significant shaping force can go undetected. We are fallible (see our post on science and caring). The reflective practitioner doesn’t seek to conceal the possibility of fallibility, but rather assumes that fallibility is foundational to an interpretive practice (and to open systems) that is as much art as science.

Thus, case studies are not one story retold, but many stories. Not one outcome in many different contexts, but many different outcomes, emerging from detailed accounts of practitioners establishing relationships with clients. Neither does the case study necessarily reflect one perspective, or even multiple perspectives, but instead a commitment to multiperspectivalism. Knowledge produced by case studies contributes to the professions by providing richly detailed, complex accounts of individual instances of suffering, flourishing, and recovery. And the case study, across the disciplines, serves many purposes: explanation, interpretation, and understanding.

It is through the case study that we can best understand the things that matter most to people (Flyvbjerg, 2001; Longhofer & Floersch, 2012, 2014). We believe that it is through the case that we can be truly engaged with those we serve, that is, engaged scholarship (Longhofer & Floersch, 2012, 2014; Van de Ven, 2007). It is in the particularities of the case (see the podcasts above) that value propositions and normative claims can be made transparent and considered; indeed, the case study may be crucial to our ongoing dialogue with our communities of interest, individual and collective. And it is through the case that our clients and communities come to know researchers’ motivations and intentions, values, and proposed actions. It is through the case study that we’re most likely to escape what Pierre Bourdieu (1990, 2000) called the scholastic fallacy (see our recent blog post): the many ways that our methods and means of knowledge production obscure the things that matter most to people.

Yes, we can use big datasets to understand caring, caring practices, and how to care, but only if we understand that their power lies not in prediction but in description. It is useful to know about trends in social work and in society, and big data is likely very useful in identifying a social problem: lack of access to affordable health care, for example. But it rarely helps us understand the mechanisms producing that lack of affordable health care, or structural racism. Here international comparative cases are essential to understanding the larger trends. Comparing how a single-payer system works in one country with how care is financed in another would help us identify mechanisms (e.g., market domination vs. market regulation). At the very least, big data, or variable analysis, needs to complement case study research and writing. When it comes to caring, it is particularly important that we understand how open systems produce different caring outcomes. Caring is just that complex.


References

Cartwright, N., & Hardie, J. (2012). Evidence-based policy: A practical guide to doing it better. Oxford University Press.

Flyvbjerg, B. (2001). Making social science matter: Why social inquiry fails and how it can succeed again. Cambridge: Cambridge University Press.

Longhofer, J., Floersch, J., & Hartmann, E. (2017). A case for the case study: How and why they matter. Clinical Social Work Journal, 45(3), 189–200.

Miller, R. E. (2019). On the End of Privacy: Dissolving Boundaries in a Screen-Centric World. Pittsburgh: University of Pittsburgh Press.

O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. New York: Crown.

Steinmetz, G. (2004). Odious comparisons: Incommensurability, the case study, and “small N’s” in sociology. Sociological Theory, 22(3), 371–400.
