This essay (serialized here across 24 separate posts) uses words and numbers to discuss the uses of words and numbers — particularly examining evaluations of university degrees that employ statistical data to substantiate competing claims. Statistical analyses are crudely introduced as the mode du jour of popular logic, but any ratiocinative technique could likely be inserted in this refillable space and applied to create and defend categories of meaning with or without quantitative support. Questions posed across the series include: Is the data informing or affirming what we believe? What are the implications of granting this approach broader authority? The author, Melanie Williams, graduated from UA in 2006 with a B.A. in Anthropology and Religious Studies.
Nate Silver makes it clear that his preference for Bayes’ theorem is a conscientious nod to the role of uncertainty in any depiction of the future. His attempts to mitigate that uncertainty, we can imagine, are built into his model; his accounting for it is evident in the probabilities with which he lays out his forecasts, each acknowledging some estimated chance of getting it right or getting it wrong – if we could say, in a sense, that Nate Silver could be “wrong.” Because each candidate’s odds are expressed as probabilities, Silver builds his misses into his projections. When an election result falls within the minority of his probability set, it is merely a playing out of what he had always acknowledged was a possibility – which is just about the best form of hedging anyone might devise. And yet his renown turns on his getting it right where others were mistaken, using polling data available to every pundit attempting to forecast the election results. So what gives? Why is it so difficult to find the “signal” within the “noise”?
“The reason has to do with the disconnect between the perfection of nature and our very human imperfections in measuring and understanding it.” The Signal and the Noise, p. 242.
Is there a flawless syntax of nature we misconstrue with our coarse grammar? Why do we speak of “raw data”? “Nature’s perfection”? “Pure chance”? “Flawed models”? “Incomplete information” or “failed intelligence”? Is this how “true” data comes to support such varied conclusions? When we propose a perfect natural law of which we might aspire to gain an imperfect understanding, are we really giving ourselves carte blanche to defend our ideologies in the throes of our “perfect” intentions but “imperfect” humanity? Any application of statistics in service of such “perfect” motives is likely not disinterested. The Perfection we can never reach “perfectly” lends an exalted sense of virtue to an otherwise blatant struggle to do just the opposite: to enlist the authority with which we imbue the perfections of nature to endorse our imperfect competing claims. This dichotomy, in other words, between the immaculate external and our vulgar subjectivity is doing a bit of political work for us, conveniently sterilizing assertions for which we no longer need to be accountable. Thus, for example, the suggestion that data can speak for itself may be a galvanizing leap forward for some; for others, it seems an attempt to elevate a given enterprise beyond wider scrutiny. I don’t think Nate Silver’s conclusions will be more correct than the data-speak approach because he is using a model; nor do I believe his political forecasts can be de-politicized through the legerdemain of conditional probability. But he is conscientiously addressing specific issues in a specific way using a specific technique, plodding through the unsexy and mundane work of calculating P(A|B) = P(B|A) P(A) / P(B), then sending his forecasts into the cloud. In contrast to much of the surrounding rhetoric, he makes no inscrutable truth claim. It may be; maybe not. And I admire that. As for his success – well. Perhaps it’s some combination of hard work, skill, and luck. Like the rest of us.
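For readers who have never watched that unsexy arithmetic run, here is a minimal sketch of the conditional-probability calculation named above. The scenario and every number in it are illustrative assumptions of mine, not Nate Silver’s model or his data; the point is only to show P(A|B) = P(B|A) P(A) / P(B) doing its plodding work of turning a prior belief and new evidence into a revised probability.

```python
def bayes_update(prior_a, p_b_given_a, p_b_given_not_a):
    """Return P(A|B) from a prior P(A) and the two likelihoods P(B|A), P(B|~A)."""
    # Total probability of the evidence: P(B) = P(B|A)P(A) + P(B|~A)P(~A)
    p_b = p_b_given_a * prior_a + p_b_given_not_a * (1 - prior_a)
    # Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)
    return p_b_given_a * prior_a / p_b

# Hypothetical example: A = "candidate wins", B = "candidate leads a new poll".
prior = 0.50            # P(A): prior chance of winning, before the poll
p_lead_if_win = 0.80    # P(B|A): chance of leading the poll given a win
p_lead_if_lose = 0.30   # P(B|~A): chance of leading the poll given a loss

posterior = bayes_update(prior, p_lead_if_win, p_lead_if_lose)
print(round(posterior, 3))  # → 0.727, the updated P(A|B)
```

Note that the posterior never reaches 1: even favorable evidence leaves a calculated chance of being wrong, which is exactly the hedge the essay describes.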
Part 14 coming today at noon…