This essay (serialized here across 24 separate posts) uses words and numbers to discuss the uses of words and numbers – particularly examining evaluations of university degrees that employ statistical data to substantiate competing claims. Statistical analyses are crudely introduced as the mode du jour of popular logic, but any ratiocinative technique could likely be inserted in this refillable space and applied to create and defend categories of meaning, with or without quantitative support. Questions posed across the series include: Is the data informing or affirming what we believe? What are the implications of granting this approach broader authority? The author, Melanie Williams, graduated from UA in 2006 with a B.A. in Anthropology and Religious Studies.
“We can perhaps never know the truth with 100% certainty, but making correct predictions is the way to tell if we’re getting closer.” – Nate Silver, The Signal and the Noise, p. 255.
You can probably guess what I’m going to ask next: What is randomness? Is it the quirky and irreducible unpredictability of nature, at the level of the smallest particle? Or is randomness a temporary inconvenience, an illusion of uncertainty that is really just a lamentable lack of data? It may seem like a tedious argument, or a matter of semantics, or completely irrelevant to the topic at hand – but our reception of statistical data has a lot to do with how we define and describe randomness, and how much agency we give it in our wider senses of the universe. If you believe in a deterministic connect-the-dot universe, what seems like chance is rather an unfortunate consequence of the dots we don’t yet see, but can build a better means of detecting. If you believe in stochastic processes as a fundamental concept of natural phenomena, we can’t say for sure what the hell the dots are doing, or if they can be said to “be” there at all.
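The determinist’s intuition above can be made concrete with a small sketch in Python (the function name and parameters here are illustrative, not drawn from any source in the essay): a pseudorandom sequence looks like chance, yet it is entirely fixed by its seed – the “dot we don’t yet see.” Once the seed is known, the apparent uncertainty vanishes.

```python
import random

def draws(seed, n=5):
    """Return n 'random' integers that are in fact fully
    determined by the seed."""
    rng = random.Random(seed)  # independent generator with a fixed seed
    return [rng.randint(1, 100) for _ in range(n)]

a = draws(42)
b = draws(42)
print(a)       # looks unpredictable to an observer without the seed
print(a == b)  # True – the same seed replays the sequence exactly
```

Whether quantum-level randomness is likewise an artifact of missing information, or something irreducible, is precisely the dispute the essay gestures at; the sketch only shows that “random-looking” and “undetermined” are not the same thing.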
Positions on randomness, then, play a role in determining the ways we construct and process truth claims: Will an essential truth reveal itself with persistent observation? Or do we simply “count and name whatever lies upon the special lines we trace?” Such rivalries are tied into both revelatory and scientific attempts to distinguish “imperfect” individuals from “perfect” truths – to present truths as attractive but impenetrable external realities, rather than motivated efforts to universalize contested ideas.

I should mention, again, that I am not questioning the scientific processes of observing, modeling, and testing the larger world. I hope we have established that statistical approaches are useful for studying the accretion of ideas we term natural laws. Should the same tools be used to lend a mathematical-type authority to all spheres of discourse? Or should we see certain applications as rhetorical devices stamping the imprimatur of scientific autonomy on ideological claims – much as we defer to any enigmatic jurisdiction to dissociate ourselves from our interests when we wish to approach some (external) (essential) (unassailable) truth without being implicated in our assertions?

What sort of claims are we making when we don’t have the benefit of a few petabytes of data? Or when, as is more often the case, data is not just “data”? If we gather enough input, can we determine whether you will thrive in the sharp-elbowed world of real estate investment, or default on your mortgage? Can we determine whether you are more or less “intelligent?” Whether you will Buy 2 Get 1 Half Off? Whether you are a terrorist or a patriot? Whether you are congenitally inclined to succeed or fail? And if we did, would our conclusions be the “truth” to which the “data” led us? Numbers, after all, never lie.
Part 18 coming today at noon…