On Beginnings: Part 5

This essay (serialized here across 24 separate posts) uses words and numbers to discuss the uses of words and numbers — particularly examining evaluations of university degrees that employ statistical data to substantiate competing claims. Statistical analyses are crudely introduced as the mode du jour of popular logic, but any ratiocinative technique could likely be inserted in this refillable space and applied to create and defend categories of meaning with or without quantitative support. Questions posed across the series include: Is the data informing or affirming what we believe? What are the implications of granting this approach broader authority? The author, Melanie Williams, graduated from UA in 2006 with a B.A. in Anthropology and Religious Studies.


Are there other areas where the application of Bayesian inference might seem more dubious than Nate Silver – who offers it as a universal method of assessing data – suggests? Let’s venture away from numbery things and into, say, the proceedings of a criminal trial, where an impartial judge mediates a contest of vignettes between professional raconteurs before an audience of peers tasked with deciding which version of justice is the just-iest. A criminal trial sounds straightforward in theory, and a fair candidate for Bayesian inference. A criminal trial in practice, however, is rarely so amenable to strict forms of logic. The trial process in the U.S. implies a possibility of guilt, but does not assign a probability to that possibility – in fact, ascribing such a prior probability, even a low one, seems anathema to the idea of “innocent until proven guilty.” Suppose we formulated a prior anyway, and refined our hunch as we rated each discursive nugget. The conclusions we reached would likely have as much to do with the performance of each counsel as with the supposed validity of the evidence. Even a forensic tool as widely trusted as DNA analysis relies primarily on compelling but controvertible statistical correlation.

Attorneys must convince a jury to rule in their favor, not through a dry review of “facts,” but through a dramatic presentation of bits of data as evidence of a particular narrative. To this end, statistics can be used to produce uncertainty as easily as to assuage it, trading on the workaday assumption that numbers carry a weight that grants either side some degree of authority. Given the role of statistics in shaping these narratives, I would argue that Bayesian inference is not a useful tool for a juror – not because any other form of reasoning should supersede it, but because it seems problematic to approach what is basically a contest of persuasion as though it could be held to the clean rigors of a mathematical computation.

One could argue, rather, that the vagaries of language grease the cogs of our clunky justice system, in that we rely on ambiguity, nuance, and a plurality of “truths” to assert, and likewise to challenge, suppositions. Opposing sides will argue diametric points, after all, with equal conviction, in an attempt to elicit the sort of visceral reactions that will prod jurors toward a consensus whose certainty they will never be asked to quantify – when deliberating any given charge, we are simply expected to sort someone into the unequivocal categories of “innocent” or “guilty” beyond some vague notion of “reasonable doubt.” A guilty verdict of “85% sure,” after all, means there is a 15% chance someone is innocent, and how do you ignore that significant prospect in the face of Henry Fonda’s valid point? How much chance will a defendant have of securing an appeal when a jury returns a Bayesian “98%” or “99%” guilty verdict? Are such numbers more precise appraisals of the “truth”? And if that “truth” turned out not to be so “true,” how much more difficult would it be to overturn?
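For readers curious what that thought experiment would actually look like on paper, here is a minimal sketch of the odds-form Bayesian updating a juror might attempt. Every number in it – the prior, the exhibits, the likelihood ratios – is a hypothetical of my own, not anything drawn from a real case or from Silver:

```python
# A minimal sketch of juror-style Bayesian updating: start with a prior
# probability of guilt, then multiply in a likelihood ratio for each piece
# of evidence. All numbers below are hypothetical illustrations.

def update_guilt(prior: float, likelihood_ratios: list[float]) -> float:
    """Return P(guilty | evidence) via odds-form Bayes updating.

    prior: initial P(guilty), e.g. 0.01
    likelihood_ratios: for each exhibit,
        P(evidence | guilty) / P(evidence | innocent)
    """
    odds = prior / (1.0 - prior)           # convert probability to odds
    for lr in likelihood_ratios:
        odds *= lr                         # each exhibit scales the odds
    return odds / (1.0 + odds)             # convert back to a probability

# The same three exhibits (a DNA match, an eyewitness, a shaky alibi),
# first as the prosecution would weight them...
print(update_guilt(0.01, [1000, 3, 2]))    # ~0.98 -- a "98% guilty" verdict

# ...then as the defense would weight them (lab error rates, fallible
# memory, an innocent explanation).
print(update_guilt(0.01, [50, 1.2, 0.8]))  # ~0.33
```

The arithmetic itself is trivial; the verdict it produces swings from “98% guilty” to coin-flip territory depending entirely on which likelihood ratios a juror is persuaded to accept – which is rather the point.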

Part 6 coming today at noon…