On Beginnings: Part 4

This essay (serialized here across 24 separate posts) uses words and numbers to discuss the uses of words and numbers – particularly examining evaluations of university degrees that employ statistical data to substantiate competing claims. Statistical analyses are crudely introduced as the mode du jour of popular logic, but any ratiocinative technique could likely be inserted in this refillable space and applied to create and defend categories of meaning with or without quantitative support. Questions posed across the series include: Is the data informing or affirming what we believe? What are the implications of granting this approach broader authority? The author, Melanie Williams, graduated from UA in 2006 with a B.A. in Anthropology and Religious Studies.

Nate Silver’s political forecasting methods employ a variant of conditional probability known amongst Number Munchers as Bayesian inference. In the 18th century, Thomas Bayes authored an essay, unpublished in his lifetime, offering a method for advancing inquiries in the face of undefined variables, using probabilities. Silver’s application of Bayes’ theorem falls under the subjectivist umbrella of its use, in which an experiential hunch is assigned some initial, arbitrary degree of likelihood, called the prior probability. We could just call the prior a hypothesis – one with specific odds of being borne out by observation. Observations themselves then “condition” the prior, either supporting or refuting the hypothesis according to the value each bit of data brings to the equation, expressed in probabilities. These conditional probabilities of data, more often called “evidence,” are calculated somewhat circularly, given some combination of objective and subjective measures of the likelihood of the hypothesis predicting the evidence, and the likelihood of the evidence implying the hypothesis. These conditional probabilities are then compounded with the prior probability to yield a new likelihood of the hypothesis, accounting for the evidence that may have implied the hypothesis that might have predicted the evidence. That’s right. This process can continue in the same helical form until an adequate series of adjusted probabilities (called posteriors) leads one to accept, redefine, or discard the prior.

Bayes’ theorem has many specialized derivatives, but it is best known for allowing the user to estimate a range of likelihoods from meager information, because it assigns both hypothesis and conditional data those interdependent, never-100% probabilities, from which the composite probability of the outcome is calculated – the idea being that each trial will either contradict or confirm your prior, allowing you to update and refine its likelihood as your trials progress.

Contrast this to the frequentist approach championed by Ronald Fisher in the 1920s, in which a theory with no initial value slowly acquires credibility over a series of rigorous trials, the results of which must be filtered through their p-values: a standard measure of the probability that a deviation at least as large as the one observed would arise from chance alone.

Each method seems to have its advantages – the frequentist one claiming more objectivity and a process of validation that relies more on exhaustive trials and peer review; the Bayesian one allowing the user to pursue a line of inquiry well into the realm of unknowns, using a pliant estimate of likelihoods to alternately spur or yoke a premise. Bayesian inference seems to offer more flexibility in this regard, as an informal, ad hoc tool for assessing chance; it also has the benefit of smelling like what most of us would consider plain vanilla reason, wherein our existing beliefs and values can be tweaked in light of new circumstances while continuing to inform the decisions we make as we go along. As a forecasting tool, Bayesian inference would appear to have its limits – most notably, when a prior probability is not available, in the face of unprecedented events or complex circumstances. I would trust Bayesian priors, for instance, if I were trying to remember where I parked my bike last night. I would not trust Bayesian inference to predict what will happen if I press this button.
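
To make that updating loop concrete, here is a minimal Python sketch of a single Bayesian turn, built on the bike-parking scenario above. The candidate spots, the piece of evidence, and every probability below are hypothetical placeholders chosen for illustration – nothing here comes from Silver’s models or Bayes’ essay.

```python
# A toy Bayesian update: prior hunches about where the bike is parked,
# conditioned on one hypothetical observation. All numbers are invented.

# Prior: an experiential hunch about each spot, assigned initial odds.
prior = {
    "rack by the library": 0.6,
    "front porch": 0.3,
    "left at the bar": 0.1,
}

# Likelihoods: P(evidence | hypothesis) -- how strongly each hypothesis
# predicts the (hypothetical) observation "my helmet is still at home".
likelihood = {
    "rack by the library": 0.2,
    "front porch": 0.7,
    "left at the bar": 0.9,
}

def update(prior, likelihood):
    """One turn of the helix: compound each prior with its conditional
    probability, then renormalize so the posteriors sum to one."""
    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

posterior = update(prior, likelihood)
for hypothesis, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"{hypothesis}: {p:.2f}")
```

Run once, the evidence drags “front porch” from a 0.3 prior to a 0.50 posterior; feed that posterior back in as the next round’s prior and the helix continues, exactly as described above.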
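
And for contrast, a similarly minimal sketch of the frequentist p-value: a hypothetical trial of 100 coin flips, asking how often chance alone would produce a result at least as lopsided as the one observed. The counts and the conventional 0.05 threshold are illustrative assumptions, not figures from the essay.

```python
# A toy exact p-value: did 60 heads in 100 flips deviate from a fair
# coin by more than chance alone would explain? All counts are invented.
from math import comb

n, heads = 100, 60   # hypothetical observation: 60 heads in 100 flips
p_null = 0.5         # the null hypothesis: the coin is fair

def binom_pmf(k, n, p):
    """Probability of exactly k heads in n flips of a p-weighted coin."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

observed = binom_pmf(heads, n, p_null)
# Two-sided exact test: total probability of every outcome at least as
# unlikely, under the null, as the outcome actually observed.
p_value = sum(binom_pmf(k, n, p_null) for k in range(n + 1)
              if binom_pmf(k, n, p_null) <= observed)
print(f"p-value: {p_value:.4f}")  # ~0.0569, just shy of the usual 0.05 bar
```

Note that nothing here required a prior: the theory starts with no initial value, and only the accumulated trials, filtered through that p-value, lend it credibility or take it away.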

Part 5 coming tomorrow morning…