On Beginnings: Part 6

This essay (serialized here across 24 separate posts) uses words and numbers to discuss the uses of words and numbers — particularly examining evaluations of university degrees that employ statistical data to substantiate competing claims. Statistical analyses are crudely introduced as the mode du jour of popular logic, but any ratiocinative technique could likely be inserted in this refillable space and applied to create and defend categories of meaning with or without quantitative support. Questions posed across the series include: Is the data informing or affirming what we believe? What are the implications of granting this approach broader authority? The author, Melanie Williams, graduated from UA in 2006 with a B.A. in Anthropology and Religious Studies.

If you’re still here, forgive me.  I don’t claim to grasp, nor to be in any position to explain, the finer points of calculating probability, which are beyond my purview (if you have further interest, allow me to suggest any number of excellent and more expert books on the topic).  I only mention Bayesian priors in reference to Nate Silver’s methods in order to point out that he uses a method.  Silver’s predictions, however you classify them, use a model to process data selected by him to arrive at a conclusion – a conclusion that is the result of his operation upon what he has chosen to pay attention to.  Nate Silver, in short, is using statistical data to calculate probabilities.  The forecasts he derives from those calculations we can call a variety of statistical inference.  Since his Bayesian approach relies on probabilities, it may prove less helpful in systems of increasing uncertainty and complexity, where the implications of given variables are not limited to known sets – a courtroom being an example of such a complex (social) setting, in which statistical data may conveniently suit a purpose more than “unveil a truth.”  And yet, even within the bounds he has drawn for his analyses, Silver’s success is predicated on his competitors’ “getting it wrong” while using the same data sets with the same spectrum of outcomes.  If, as Silver suggests, there is a “signal” hidden in the “noise” of statistical data (terms lifted from the lingo of electrical engineers), why can’t everyone concoct a model to predict the winners of political elections?  Statistical data that feed widely varying conclusions suggest, to me, that such inferences have more in common with the rhetorical techniques used in a courtroom than with calculating how many blue cars may drive through my town on any given day.
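
For readers who would rather see a thing than take my word for it, here is a minimal sketch of what a “prior” does. It is my own toy illustration, with invented numbers, and emphatically not Silver’s model: the same piece of evidence, pushed through the same rule, lands on different conclusions depending on what each analyst believed before looking at it.

```python
# A toy illustration of Bayes' rule (not Nate Silver's model; the numbers are invented).
# Two analysts see the same evidence but start from different priors,
# and the same arithmetic carries them to different conclusions.

def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """P(hypothesis | evidence), computed by Bayes' rule."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

# The shared evidence: a poll favoring Candidate A.
# Assumed likelihoods: P(poll | A wins) = 0.7, P(poll | A loses) = 0.4.
print(posterior(0.5, 0.7, 0.4))   # a neutral analyst ends up near 0.64
print(posterior(0.1, 0.7, 0.4))   # a skeptical analyst ends up near 0.16
```

The rule is the same and the evidence is the same; only the starting belief differs, and with it the verdict.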

What sort of statistical techniques are my linen-clad internet moguls using to distinguish between the “signal” and the “noise”?  How is statistical data used in the broader forums of academia, research, or popular culture?  Whose interpretations of statistical data are we reading in any given analysis?  To what extent do those interpretations or models reflect the interests of the interpreter?  And how are we to tell?  Persuasion is a craft in which statistical inference can be employed to varying effect, and we may not always know what we are being persuaded to believe – not least because the presenters themselves rarely agree on the portent of any given set of figures.  The countless articles I have read debating the value of a college degree in the years since I graduated, for instance, also employ meticulous methods of data-gathering and sophisticated tools of statistical inference, yet oddly seem to use the same data sets to support any number of contrary verdicts.  In a courtroom, a judge is the arbiter of what may be admitted as evidence; what constitutes data in these various articles, surveys, and studies?  Who decides?  How is the data organized and assessed, and by whom?

Part 7 coming tomorrow morning…