Seminars by Term:

September 10, 2018

Branden Fitelson, Northeastern University, Department of Philosophy

Two Approaches to Belief Update


There are two dominant paradigms in the theory of qualitative belief change. While belief revision theory attempts to describe the way in which rational agents revise their beliefs upon gaining new evidence about an essentially static world, the theory of belief update is concerned with describing how such agents change their qualitative beliefs upon learning that the world is changing in some way.

A similar distinction can be made when it comes to describing the way in which a rational agent changes their subjective probability assignments, or 'credences', over time. On the one hand, we need a way to describe how these credences evolve when the agent learns something new about a static environment. On the other hand, we need a way to describe how they evolve when the agent learns that the world has changed.

According to orthodoxy, the correct answers to the questions of how an agent should revise their qualitative beliefs and numerical credences upon obtaining new information about a static world are given by the axiomatic AGM theory of belief revision and Bayesian conditionalization, respectively. Now, under the influential Lockean theory of belief, an agent believes a proposition p if and only if their credence in p is sufficiently high (where what counts as 'sufficiently high' is determined by some threshold value t ∈ (1/2, 1]). Thus, assuming a Lockean theory of belief, Bayesian conditionalization defines an alternative theory of qualitative belief revision, where p is in the revised belief set if and only if the agent's posterior credence in p is above the relevant threshold after conditionalizing on the new evidence. Call this theory of belief revision 'Lockean revision'. The relationship between Lockean revision and the AGM theory of belief revision was systematically described by Shear and Fitelson (forthcoming).

With regard to belief updating, the most widely accepted answers to the questions of how an agent should revise their qualitative beliefs and numerical credences upon obtaining new information about how the world is changing over time are given by Katsuno and Mendelzon's axiomatic theory of belief update (KM-update) and Lewis's technique of probabilistic imaging, respectively. In this sequel to our study of the relationship between Bayesian (viz., Lockean) revision and AGM revision, we investigate the relationship between Bayesian updating (viz., Lockean imaging) and KM updating.
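For concreteness, Lockean revision can be sketched in a few lines. This is a toy model with invented worlds, propositions, and threshold, not the authors' formalism: conditionalize on the evidence, then believe exactly those propositions whose posterior credence exceeds the threshold.

```python
# Toy Lockean revision: conditionalize the credence function on the
# evidence, then believe those propositions with credence above t.

def conditionalize(prior, evidence):
    """Bayesian conditionalization; assumes the evidence has positive prior."""
    total = sum(p for w, p in prior.items() if w in evidence)
    return {w: (p / total if w in evidence else 0.0) for w, p in prior.items()}

def credence(dist, prop):
    """Credence in a proposition, represented as a set of worlds."""
    return sum(p for w, p in dist.items() if w in prop)

def lockean_belief_set(dist, props, t):
    """The Lockean belief set: propositions with credence above t."""
    return {name for name, prop in props.items() if credence(dist, prop) > t}

prior = {"w1": 0.25, "w2": 0.25, "w3": 0.25, "w4": 0.25}
props = {"p": {"w1", "w2", "w3"}, "q": {"w1", "w4"}}
t = 0.9                                    # Lockean threshold in (1/2, 1]

print(lockean_belief_set(prior, props, t))             # set(): nothing believed
posterior = conditionalize(prior, {"w1", "w2", "w3"})  # learn that p holds
print(lockean_belief_set(posterior, props, t))         # {'p'}
```

Revising on the evidence that p holds pushes the credence in p above the threshold, so p enters the belief set while q does not.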


September 17, 2018

Bill Benter, Avenue Four Analytics

Horse Sense


Wagering on horse racing is an unforgiving arena in which the bettor’s probability estimates are tested against betting odds set by public consensus via the pari-mutuel system.

The underlying event, the horse race, is a complex real world phenomenon whose outcome is determined principally by the laws of physics and biology combined with a certain amount of random variation. Overlaid on this physical process is a complex milieu of jockeys, trainers and owners whose changing proclivities can also affect race outcomes. A race can be represented with a state-space model in which the race outcome is a stochastic function of the state parameters. Complicating the situation further, most of the parameters of interest (e.g. the fitness of a particular horse or the skill of a jockey) cannot be observed directly but must be inferred from past race results.

The large takeout (~17%) levied by the racetrack means that a would-be successful bettor needs to identify situations wherein the true probability of a bet winning is at least ~17% higher than that implied by the betting odds. Racetrack betting markets are largely efficient in the sense used to describe financial markets in that the betting odds are highly informative and largely unbiased estimators of the horses’ probabilities of winning. However in the speaker’s experience, probability lies in the eye of the beholder. A practical methodology will be described whereby superior probability estimates can be produced which result in long term betting profits.
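The break-even arithmetic can be sketched briefly (a toy calculation with made-up odds, not figures from the talk):

```python
# Toy illustration of the break-even condition under a ~17% pari-mutuel
# takeout: the bettor's true probability estimate must exceed the
# odds-implied probability by roughly the takeout to profit.

def implied_prob(decimal_odds):
    """Probability implied by decimal odds (total payout per $1 staked)."""
    return 1.0 / decimal_odds

def expected_return(true_prob, decimal_odds):
    """Expected profit per $1 bet at the given odds."""
    return true_prob * decimal_odds - 1.0

odds = 5.0                 # $5 returned per $1 staked if the horse wins
q = implied_prob(odds)     # 0.20: break-even true probability at these odds
p_needed = q * 1.17        # ~17% above the implied probability

print(round(p_needed, 3))                         # 0.234
print(round(expected_return(q, odds), 3))         # 0.0  (no edge at q)
print(round(expected_return(p_needed, odds), 3))  # 0.17 profit per $1
```

A bettor whose estimate merely matches the odds-implied probability breaks even before the takeout; only an estimate beating it by roughly the takeout margin yields a positive expected return.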

September 24, 2018

Mario Hubert, Columbia University, Philosophy Department

How Statistical Explanations Rely on Typicality


Typicality has been widely used in statistical mechanics to help explain the approach to equilibrium of thermodynamic systems. I aim to show that typicality reasoning applies to many situations beyond statistical mechanics. In particular, I show how regression to the mean relies on typicality and how probabilities arise from typicality.

October 1, 2018

Rohit Parikh, Brooklyn College of CUNY, and CUNY Graduate Center.

The logic of the personal world


In his work on the inner world, Jakob von Uexkull describes how a creature like a tick, a dog, or a child sees the world and how it moves inside its personal world. He calls that world the creature's umwelt. We adults also have our umwelts, but unlike a tick or a dog we enrich our umwelt by consulting the umwelts of other people and by using science. A tick has its own, meager logic. A dog or a baby has a richer logic, and we adults have a richer logic still. How do these logics work, especially since the first two are outside language? Can we still speak of the inferences made by a dog? An insect? Can we use our language to describe their inferences? Uexkull anticipated many of the ideas current in Artificial Intelligence. The example of the Wumpus, popular in the AI literature, is anticipated by Uexkull with his example of a tick.

October 8, 2018

Nozer Singpurwalla, City University of Hong Kong, Department of Systems Engineering and Engineering Management, and Department of Management Science

Entropy, Information, and Extropy in the Courtroom and a Hacker's Bedroom


We start by motivating how the notion of information arose and how it evolved, via the idealistic scenario of a courtroom and that of a hacker trying to break a computer's password. We then introduce the notion of Shannon entropy as a natural consequence of the basic Fisher-Hartley idea of self-information, and subsequently make the charge that Shannon took a giant leap of faith when he proposed his famous, and well lubricated, formula. A consequence is that Shannon's formula overestimates the inherent uncertainty in a random variable. We also question Shannon's strategy of taking expectations and suggest alternatives to it based on the Kolmogorov-Nagumo functions for the mean of a sequence of numbers. In the sequel, we put forward the case that the only way to justify Shannon's formula is to look at self-information as a utility in a decision theoretic context. This in turn enables an interpretation for the recently proposed notion of "extropy". We conclude our presentation with the assertion that a complete way to evaluate the efficacy of a predictive distribution (or a mathematical model) is by the tandem use of both entropy and extropy.

October 15, 2018

Robin Hanson, George Mason University, Department of Economics

Uncommon Priors Require Origin Disputes


In a standard Bayesian belief model, the prior is always common knowledge. This prevents such a model from representing agents’ probabilistic beliefs about the origins of their priors. By embedding such a standard model in a larger standard model, however, we can describe such beliefs using "pre-priors". When an agent’s prior and pre-prior are mutually consistent in a particular way, she must believe that her prior would only have been different in situations where relevant event chances were different, but that variations in other agents’ priors are otherwise completely unrelated to which events are how likely. Thus Bayesians who agree enough about the origins of their priors must have the same priors.


October 22, 2018

Harry Crane, Rutgers University, Department of Statistics

Replication Crisis, Prediction Markets, and the Fundamental Principle of Probability


I will discuss how ideas from the foundations of probability can be applied to resolve the current scientific replication crisis. I focus on two specific approaches:

1. Prediction Markets to incentivize more accurate assessments of the scientific community’s confidence that a result will replicate.
2. The Fundamental Principle of Probability to incentivize more accurate assessments of the author’s confidence that their own results will replicate.

I compare and contrast the merits and drawbacks of these two approaches.


Lecture Handout

Robin Hanson's response on his blog


October 29, 2018

Robin Gong, Rutgers University, Department of Statistics

Modeling uncertainty with sets of probabilities


Uncertainty in real life takes many forms. An analyst can hesitate to specify a prior for a Bayesian model, or be ignorant of the mechanism that gave rise to the missing data. Such kinds of uncertainty cannot be faithfully captured by a single probability, but can be by a set of probabilities, and in special cases by a capacity function or a belief function.

In this talk, I motivate sets of probabilities as an attractive modeling strategy, one that encodes low-resolution information in both the data and the model with little need to concoct unwarranted assumptions. I present a prior-free belief function model for multinomial data, which delivers posterior inference as a class of random convex polytopes. I discuss challenges that arise with the employment of belief and capacity functions, specifically how the choice of conditioning rule adjudicates among a trio of unsettling posterior phenomena: dilation, contraction, and sure loss. These findings underscore the invaluable role of judicious judgment in handling low-resolution probabilistic information.
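A classic toy instance of dilation can illustrate why sets of probabilities behave so differently from single probabilities (our illustration, not necessarily the talk's example): X is a fair coin and Y agrees with X with unknown probability a, so conditioning on Y widens, rather than narrows, the probability interval for X.

```python
# Dilation with a credal set: every distribution in the set agrees that
# P(X=H) = 1/2, yet after observing Y the conditional probability of
# X=H ranges over the whole interval [0, 1].

def joint(a):
    """Joint distribution with P(X=H) = 1/2 and P(Y agrees with X) = a."""
    return {("H", "H"): a / 2, ("H", "T"): (1 - a) / 2,
            ("T", "H"): (1 - a) / 2, ("T", "T"): a / 2}

credal_set = [joint(a / 10) for a in range(11)]  # a ranges over 0.0 .. 1.0

def prob_x_heads(p):
    return p[("H", "H")] + p[("H", "T")]

def cond_prob_x_heads(p, y):
    denom = p[("H", y)] + p[("T", y)]  # always 1/2 here, never zero
    return p[("H", y)] / denom

# Unconditionally, every distribution in the set gives P(X=H) = 1/2;
# after observing Y=H, the interval dilates from the point 1/2 to [0, 1].
posts = [cond_prob_x_heads(p, "H") for p in credal_set]
print(min(posts), max(posts))  # 0.0 1.0
```

The set-valued model honestly reports that observing Y tells us nothing determinate about X when the dependence between them is unknown, which is exactly where the choice of conditioning rule starts to matter.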

November 5, 2018

David Bellhouse, University of Western Ontario

The Emergence of Actuarial Science in the Eighteenth Century


In 1987, historian of science Lorraine Daston wrote about the impact of mathematics on the nascent insurance industry in England.

"Despite the efforts of mathematicians to apply probability theory and mortality statistics to problems in insurance and annuities in the late seventeenth and early eighteenth centuries, the influence of this mathematical literature on the voluminous trade in annuities and insurance was negligible until the end of the eighteenth century."

This view is a standard one in the history of insurance today. There is, however, a small conundrum attached to this view. Throughout the eighteenth century, several mathematicians were writing books about life annuities, often long ones containing tables requiring many hours of calculation. The natural question is: who were they writing for and why? Themselves? For the pure academic joy of the exercise? The answer that I will put forward is that mathematicians were not writing for the insurance industry, but about something else – life contingent contracts related to property. These included valuing leases whose terms were based on the lives of three people, marriage settlements and reversions on estates. As in many other situations, it took a crisis to change the insurance industry. This happened in the 1770s when many newly-founded companies offered pension-type products that were grossly underfunded. This was pointed out at length in layman’s terms by Richard Price, the first Bayesian. The crisis reached the floor of the House of Commons before the mathematicians won the day.

November 12, 2018

Eddy Keming Chen, Rutgers University, Philosophy Department

On the Fundamental Probabilities in Physics


There are two sources of randomness in our fundamental physical theories: quantum mechanical probabilities and statistical mechanical probabilities. The former are crucial for understanding quantum effects such as the interference patterns. The latter are important for understanding thermodynamic phenomena such as the arrow of time. It is standard to postulate the two kinds of probabilities independently and to take both of them seriously. In this talk, I will introduce a new framework for thinking about quantum mechanics in a time-asymmetric universe, for which the two kinds of probabilities can be reduced to just one. We will then consider what that means for the Mentaculus Vision (Loewer 2016) and the phenomena that I call “nomic vagueness.” Time permitting, we will also briefly compare and contrast my theory with some other proposals in the literature by Albert (2000), Wallace (2012), and Wallace (2016).

We will not assume detailed knowledge of physics, and we will introduce the necessary concepts in the first half of the talk.

November 19, 2018

Doron Zeilberger, Rutgers University, Mathematics Department

An Ultra-Finitistic Foundation of Probability


Probability theory started on the right foot with Cardano, Fermat and Pascal when it restricted itself to finite sample spaces, and was reduced to counting finite sets. Then it got ruined by attempts to come to grips with that fictional (and completely superfluous) 'notion' they called 'infinity'.

A lot of probability theory can be done by keeping everything finite, and whatever can't be done that way is not worth doing. We live in a finite world, and any talk of 'infinite' sample spaces is not even wrong, it is utterly meaningless. The only change needed, when talking about an 'infinite' sequence of sample spaces, say of n coin tosses, {H,T}^n, for 'any' n, tacitly implying that you have an 'infinite' supply of such n, is to replace it by the phrase 'symbolic n'.

This new approach is inspired by the philosophy and ideology behind symbolic computation. Symbolic computation can also redo, ab initio, without any human help, large parts of classical probability theory.
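In this finitist spirit, probability over {H,T}^n really is just counting, and can be done with exact rationals rather than limits (a small illustration of our own, not from the talk):

```python
# Finite probability as counting: over the finite sample space {H,T}^n,
# probabilities are exact ratios of counts, with no appeal to infinity.
from fractions import Fraction
from math import comb

def prob_k_heads(n, k):
    """P(exactly k heads in n fair tosses) = C(n, k) / 2^n, exactly."""
    return Fraction(comb(n, k), 2 ** n)

print(prob_k_heads(4, 2))                         # 3/8
print(sum(prob_k_heads(4, k) for k in range(5)))  # 1
```

Every answer is an exact fraction obtained by counting finite sets, and n can just as well be left symbolic in a computer algebra system.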

November 26, 2018

Arthur Van Camp, Ghent University, Department of Electronics and Information Systems

Choice functions as a tool to model uncertainty


Choice functions constitute a very general and simple mathematical framework for modelling choice under uncertainty. In particular, they are able to represent the set-valued choices that typically arise from applying decision rules to imprecise-probabilistic uncertainty models. Choice functions can be given a clear behavioural interpretation in terms of attitudes towards gambling.

I will introduce choice functions as a tool to model uncertainty, and connect them with sets of desirable gambles, a very popular but less general imprecise-probabilistic uncertainty model. Once this connection is in place, I will focus on two important devices for both models. First, I will discuss performing conservative inferences with both models. Second, I will discuss how both models can cope with assessments of symmetry and indifference.

December 3, 2018

Deborah G. Mayo, Virginia Tech University, Philosophy Department

Statistical Inference as Severe Testing (How it Gets You Beyond the Statistics Wars)


High-profile failures of replication in the social and biological sciences underwrite a minimal requirement of evidence: if little or nothing has been done to rule out flaws in inferring a claim, then it has not passed a severe test. A claim is severely tested to the extent that it has been subjected to and passes a test that probably would have found flaws, were they present. This minimal severe-testing requirement leads to reformulating significance tests (and related methods) to avoid familiar criticisms and abuses. Viewing statistical inference as severe testing – whether or not you accept it – offers a key to understanding and getting beyond the statistics wars.

Bio: Deborah G. Mayo is Professor Emerita in the Department of Philosophy at Virginia Tech and is a visiting professor at the London School of Economics and Political Science, Centre for the Philosophy of Natural and Social Science. She is the author of Error and the Growth of Experimental Knowledge (Chicago, 1996), which won the 1998 Lakatos Prize awarded to the most outstanding contribution to the philosophy of science during the previous six years. She co-edited Error and Inference: Recent Exchanges on Experimental Reasoning, Reliability, and the Objectivity and Rationality of Science (CUP, 2010) with Aris Spanos, and has published widely in the philosophy of science, statistics, and experimental inference. She will co-direct a summer seminar on Philosophy of Statistics, intended for philosophy and social science faculty and post docs, July 28-August, 2019.

Link to the proofs of the first Tour of Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars (Mayo 2018, CUP)

January 28, 2019

E. Glen Weyl, Microsoft Research

Political Economy for Increasing Returns


The very existence of civilization implies that typically many people must together be able to produce more value than the sum of what they could each produce independently. Yet neoliberal capitalism is wildly inefficient in the presence of such “increasing returns” and standard democracy is far too rigid to address this failure. Drawing on novel, approximately optimal market mechanisms for increasing returns, I will sketch what a political economy adapted to increasing returns would look like. Nearly all consumption would be in the form of public goods and most private property would belong to a range of communities. A rich plurality of emergent public good-providing communities would replace democratic states and monopolistic corporations. The distinction between economics and political science, and between methodological individualism and communitarianism, would dissolve. This vision suggests many intellectual directions and a social ideology I call Liberal Radicalism.

February 4, 2019



February 11, 2019

Ole Peters, London Mathematical Laboratory and Santa Fe Institute

The ergodicity problem in economics


The ergodicity problem queries the equality or inequality of time averages and expectation values. I will trace its curious history, beginning with the origins of formal probability theory in the context of gambling and economic problems in the 17th century. This is long before ergodicity was a word or a known concept, which led to an implicit assumption of ergodicity in the foundations of economic theory. 200 years later, when randomness entered physics, the ergodicity question was made explicit. Over the past decade I have asked what happens to foundational problems in economic theory if we export what is known about the ergodicity problem in physics and mathematics back to economics. Many problems can be resolved. Following an overview of our theoretical and conceptual progress, I will report on a recent experiment that strongly supports our view that human economic behavior is better described as optimizing time-average growth rates of wealth than as optimizing expectation values of wealth or utility of wealth.
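The standard toy example used to make this point concrete (assumed here; the talk's own example may differ) is a multiplicative gamble whose expectation value grows while almost every individual trajectory decays:

```python
# Multiplicative gamble: wealth is multiplied by 1.5 on heads, 0.6 on tails.
# The expectation value grows 5% per round, yet the time-average growth
# factor is sqrt(1.5 * 0.6) < 1, so a single trajectory decays almost surely.
import math
import random

up, down = 1.5, 0.6

ensemble_growth = 0.5 * up + 0.5 * down  # expectation factor per round
time_avg_growth = math.sqrt(up * down)   # per-round growth along one history

print(round(ensemble_growth, 3), round(time_avg_growth, 3))  # 1.05 0.949

# Simulate one long trajectory: despite the positive expectation,
# wealth collapses, because the time average governs what a single
# agent actually experiences over time.
random.seed(0)
wealth = 1.0
for _ in range(10_000):
    wealth *= up if random.random() < 0.5 else down
print(wealth < 1e-100)  # True
```

The gamble is non-ergodic: averaging over an ensemble of parallel gamblers and averaging one gambler over time give opposite verdicts, which is precisely the gap between expectation values and time averages at issue.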

February 18, 2019

Jennifer Carr, University of California, San Diego

A Modesty Proposal


Accuracy-first epistemology aims to show that the norms of epistemic rationality, including probabilism and conditionalization, can be derived from the effective pursuit of accuracy. This paper explores the prospects within accuracy-first epistemology for vindicating “modesty”: the thesis that ideal rationality permits uncertainty about one’s own rationality. I give prima facie arguments against accuracy-first epistemology’s ability to accommodate three forms of modesty: uncertainty about what priors are rational, uncertainty about whether one’s update policy is rational, and uncertainty about what one’s evidence is. I argue that the problem stems from the representation of epistemic decision problems. The appropriate representation of decision problems, and corresponding decision rules, for (diachronic) update policies should be a generalization of decision problems and decision rules used in the assessment of (synchronic) coherence.

February 25, 2019

No Seminar


March 4, 2019

Glenn Shafer, Rutgers, Business School

The Language of Betting as a Strategy for Communicating Statistical Results


Our vocabulary for statistical testing is too complicated. Even statistics teachers and scientists who use statistics answer questions about p-values incorrectly. We can communicate statistical results better using the language of betting.

Betting provides
• a simple frequentist interpretation of likelihood ratios, significance levels, and p-values,
• a framework for multiple testing and meta-analysis.

Complex problems require carefully defined betting games. See Game-Theoretic Foundations for Probability and Finance, (Wiley, May 2019).

The betting language also helps us avoid the fantasy of multiple unseen worlds when interpreting probabilistic models in science.
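A minimal sketch of the betting interpretation (our own toy construction, not an example from the book): testing a null hypothesis can be framed as betting against it, with the realized payoff of a likelihood-ratio bet serving as the measure of evidence.

```python
# Toy "testing by betting": bet against a null by multiplying capital by
# the likelihood ratio at each observation. Under the null this bet has
# expected payoff 1 per round, so a large final capital is direct,
# interpretable evidence against the null.

def betting_score(outcomes, p_null, p_alt):
    capital = 1.0
    for x in outcomes:
        capital *= p_alt[x] / p_null[x]  # a fair bet under the null
    return capital

p_null = {"H": 0.5, "T": 0.5}   # null: fair coin
p_alt = {"H": 0.7, "T": 0.3}    # alternative we bet on

data = ["H"] * 70 + ["T"] * 30  # 70 heads in 100 tosses
score = betting_score(data, p_null, p_alt)
print(score > 1)  # True: we multiplied our initial capital many times over
```

The final capital has a direct frequentist reading with no inversion of conditional probabilities: if the null were true, we would rarely multiply our money by so large a factor.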

March 11, 2019

Dan Bouk, Colgate University, History Department

Making Statistical Individuals at the Turn of the Twentieth Century: How Insurance Corporations Numbered Americans' Days and Valued Their Lives


Statistics describe big groups. Probability works best for large numbers. For much of the history of both fields, statistics and probability had little to say about particular individuals. Today, in contrast, Big Data enthusiasts thrill at the promise that more data and more sophisticated methods can lead to better predictions of individuals' futures. This talk looks at one important time and place where the concept of the statistical individual developed: the life insurance industry in the late-nineteenth and early-twentieth century. This talk focuses on the work of actuaries, doctors, statisticians, and others in and around the U.S. life insurance industry who developed techniques to value lives, describe bodies as "risks," and justify existing forms of racism, inequality, and discrimination.

March 25, 2019

Harry Crane, Rutgers University, Statistics Department

A Formal Model for Intuitive Probabilistic Reasoning


I propose a formal framework for intuitive probabilistic reasoning (IPR). The proposed system aims to capture the informal, albeit logical, process by which individuals justify beliefs about uncertain claims in legal argument, mathematical conjecture, scientific theorizing, and common sense reasoning. The philosophical grounding and mathematical formalism of IPR takes root in Brouwer's mathematical intuitionism, as formalized in intuitionistic Martin-Lof type theory (MLTT) and homotopy type theory (HoTT). Formally, IPR is distinct from more conventional treatments of subjective belief, such as Bayesianism and Dempster-Shafer theory. Conceptually, the approaches share some common motivations, and can be viewed as complementary.

Assuming no prior knowledge of intuitionistic logic, MLTT, or HoTT, I discuss the conceptual motivations for this new system, explain what it captures that the Bayesian approach does not, and outline some intuitive consequences that arise as theorems. Time permitting, I also discuss a formal connection between IPR and more traditional theories of decision under uncertainty, in particular Bayesian decision theory, Dempster--Shafer theory, and imprecise probability.

H. Crane. (2018). The Logic of Probability and Conjecture. Researchers.One.

H. Crane. (2019). Imprecise probabilities as a semantics for intuitive probabilistic reasoning. Researchers.One.

H. Crane and I. Wilhelm. (2019). The Logic of Typicality. In Statistical Mechanics and Scientific Explanation: Determinism, Indeterminism and Laws of Nature (V. Allori, ed.). Available at Researchers.One.

April 1, 2019

Richard Pettigrew, Bristol University, Department of Philosophy

Accuracy and the Laws of Credence


An agent’s degrees of belief should satisfy the axioms of probability. She should update her degrees of belief in the light of new evidence in line with the Bayesian rule of conditionalization. If she learns the objective chances, her degrees of belief ought to match them. In the absence of any evidence, she ought to distribute her degrees of belief equally over all possibilities. These are norms that govern epistemic agents when we represent them as having degrees of belief in the propositions they entertain. What establishes these norms? Pragmatic arguments have been given for some; evidentialist arguments for others. In this talk, I want to describe an alternative sort of argument. It begins with the claim that the sole fundamental virtue of degrees of belief is their accuracy, or proximity to the truth, and it provides a way of measuring this accuracy. Finally, it derives the consequences of this assumption. Amongst those consequences are the four norms just listed.
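A minimal numerical instance of this style of argument (our illustration; the talk's treatment is more general) shows how incoherent credences are accuracy-dominated under the Brier score:

```python
# Accuracy-dominance sketch: credences that violate the probability axioms
# (here, credence 0.6 in both p and not-p) are strictly less accurate than
# some coherent credences in *every* possible world, measured by the Brier
# score (squared distance from the truth values 1 and 0).

def brier(credences, truths):
    return sum((c - t) ** 2 for c, t in zip(credences, truths))

incoherent = [0.6, 0.6]  # credences in p and in not-p; they sum to 1.2
coherent = [0.5, 0.5]    # a probabilistically coherent alternative

for truths in ([1, 0], [0, 1]):  # the two possible worlds
    assert brier(coherent, truths) < brier(incoherent, truths)
print("the coherent credences are strictly more accurate in every world")
```

Whatever the truth about p turns out to be, the coherent credences are closer to it, so an agent who cares only about accuracy has no reason to hold the incoherent ones.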

April 8, 2019

Simon DeDeo, Carnegie Mellon University and the Santa Fe Institute

Predictive Brains, Dark Rooms, and the Origins of Life


A coincidence between thermodynamic quantities such as entropy and free energy, and epistemic quantities such as compression and coding efficiency, has led a number of physicists to wonder if apparently material features of our world are simply states of knowledge. In 2005, Karl Friston, a neuroscientist with physicist sympathies, turned this ontological claim up to eleven with the introduction of the predictive brain theory. This is a radical approach to cognitive science that understands organisms as engaged in a perception-action loop with the environment, devoted to minimizing prediction error. This error is quantified as the free energy between an organism's sensory states and its environment. Because free energy is both an epistemic and a physical quantity, it may be possible to derive not just cognition, but life itself, from purely epistemic considerations and without the introduction of an additional fitness or utility function. A central difficulty for this theory has been the Dark Room Problem: such minimizers, it seems, would prefer actions that fuzz out sensory data and avoid opportunities to revise their theories. This leads to a paradox, because organisms cannot become better predictors if they do not make the mistakes that help them learn. I present recent results showing that, by contrast, predictive brains are curious configurations that naturally explore the world: it turns out they leave dark rooms, because the physical drive to minimize free energy makes them take decisive, theory-simplifying actions. I argue that many fields that rely on a utility function for their predictive power may be able to do without it: free energy minimizers might not just be successful organisms or good reasoners, for example, but also good at running a business.

April 15, 2019

Dustin Lazarovici, Université de Lausanne, Section de Philosophie

Arrows of Time without a Past Hypothesis


The talk will discuss recent attempts by Sean Carroll and Julian Barbour to account for the thermodynamic arrow of time in our universe without a Past Hypothesis, i.e., the assumption of a special (low-entropy) initial state. In this context, I will also propose the definition of a Boltzmann entropy for a classical gravitating system and argue that it may provide a relevant example of a "Carroll universe".


April 22, 2019

Anya Farennikova, CUNY

Probabilistic Perception


Scientists and philosophers have been debating whether perception is probabilistic. However, the debate is conflating different senses in which perception is probabilistic. In this talk, I'm going to sort those senses out and review new (and unexpected) evidence for probabilistic perception from split-vision experiments and anomalous conscious states. This evidence sheds new light on the concept of probabilistic perception and helps answer the challenge of how perceptual experience can have probabilistic phenomenology.

April 29, 2019

John Wu, Rutgers University, Physics Department

Deep learning and astrophysics: galaxy scaling relations


I will discuss some applications of deep convolutional neural networks (convnets), and present a high-level overview of how to select, optimize, and interpret convnet models. We have trained a convnet to recognize a galaxy's chemical abundance using only an image of the galaxy; the traditional approach using spectroscopy requires at least an order of magnitude more telescope time and achieves a comparable level of accuracy to our method. We discover that the convnet can recover an empirically known scaling relation that connects galaxies' chemical enrichment and star formation histories with zero additional scatter, implying that there exists (and that the convnet has learned) a novel representation of the chemical abundance that is strongly linked to its optical-wavelength morphology.

May 6, 2019

Kevin Dorst, MIT, Philosophy Department

Evidence of Evidence: A Higher-Order Approach


"Evidence of evidence is evidence" (EEE) is a slogan that has stirred much recent debate in epistemology. The intuitive idea seems straightforward: if you have reason to think that there is evidence supporting p, then---since what's supported by evidence is likely to be true---you thereby have (some) reason to think that p. However, formulating precise, nontrivial versions of this thesis has proven difficult. In this paper we propose to do so using a higher-order approach---a framework that lets us model (higher-order) opinions about what opinions you should have, i.e. opinions about what opinions your evidence warrants. This framework allows us to formulate propositions about your evidence as objects of uncertainty, and therefore to formulate principles connecting evidence about evidence for p to evidence about p. Drawing on a general theory of rational higher-order uncertainty developed elsewhere, we examine which versions of EEE principles are tenable---showing that although many are not, several strong ones are. If these details are correct, then it has (broadly conciliationist) implications for the peer disagreement debate that started the EEE discussion. And regardless of the details, we hope to show that a higher-order approach is fruitful for formulating and testing precise versions of the "evidence of evidence is evidence" slogan.

September 11, 2017

Nassim Nicholas Taleb, NYU, Tandon School of Engineering

Central problems with probability


1) Confusion at the level of the payoff functions (convexity matters); 2) confusion concerning the Law of Large Numbers; 3) misuse of the notion of probability.

September 18, 2017

Eddy Chen, Rutgers, Department of Philosophy

Our Knowledge of the Past: Some Puzzles about Time’s Arrow and Self-Locating Probabilities


Why is there an apparent arrow of time? The standard answer, due to Ludwig Boltzmann and developed by the contemporary Boltzmannians, attributes its origin to a special boundary condition on the physical space-time, now known as the "Past Hypothesis." In essence, it says that the "initial" state of the universe was in a very orderly (low-entropy) state. In this talk, I would like to consider an alternative theory, motivated by the (in)famous Principle of Indifference. I will argue that the two theories, at least in some cosmological models, are in fact empirically on a par when we consider their de se (self-locating) content about where we are in time. As we shall see, our comparative study leads to an unexpected skeptical conclusion about our knowledge of the past. We will then think about what this means for the general issue in philosophy of science about theory choice and pragmatic considerations.

September 25, 2017

Nina Emery, Mount Holyoke College, Department of Philosophy

The Explanatory Role Argument for Deterministic Chance


One common reason given for thinking that there are non-trivial objective probabilities—or chances—in worlds where the fundamental laws are deterministic is that such probabilities play an important explanatory role. I examine this argument in detail and show that insofar as it is successful it places significant constraints on the further metaphysical theory that we give of deterministic chance.

October 2, 2017

Jacob Feldman, Rutgers University, Department of Psychology and Center for Cognitive Science

Subjective probability in Bayesian cognitive science


The last twenty years have seen an enormous rise in Bayesian models of cognitive phenomena, which posit that human mental function is approximately rational or optimal in nature. Contemporary theorizing has begun to settle on a “Common Wisdom”, in which human perception and cognition are seen as approximately optimal relative to the objective probabilities in the real world — “the statistics of the environment,” as it is often put. However, traditional philosophy of probability in Bayesian theory generally assumes an epistemic or subjectivist conception of probability, which holds that probabilities are characteristics of observers’ states of knowledge, and do not have objective values — which implies, contrary to the Common Wisdom, that there is actually no such thing as an objective observer-independent “statistics of the environment.” In this talk I will discuss why exactly Bayesians have historically favored the subjectivist attitude towards probability, and why cognitive science should as well, highlighting some of the inconsistencies in current theoretical debate in cognitive science. My aim is partly to criticize the current state of the field, but mostly to point to what I see as a more productive way in which a subjective conception of probability can inform models of cognition.

October 9, 2017

Alison Fernandes, University of Warwick

Do Humean Reductions of Chance Justify the Principal Principle?


Objective chances are used to guide credences and in scientific explanations. Knowing there’s a high chance that the smoke in the room disperses, you can both infer that it will, and explain why it does. Defenders of ‘Best Systems’ and other Humean accounts (Lewis, Loewer, Hoefer) claim to be uniquely well placed to account for both features. These accounts reduce chance to non-modal features of reality. Chances are therefore objective and suitable for use in scientific explanations. Because Humean accounts reduce chance to patterns in actual events, they limit the possible divergence between relative frequencies and chances. Agents who align their credences with known chances are then guaranteed to do reasonably well when predicting events. So it seems Humean accounts can justify principles linking chance to credence such as Lewis’ Principal Principle. But there’s a problem. When used in scientific explanations, Humean chances and relative frequencies must be allowed to diverge to arbitrarily high degrees. So if we consider the scientific question of whether agents who align their credences to the (actual) Humean chances will do well, it is merely probable they will. The scientific use of chance undercuts the advantage Humeans claim over their rivals in showing how chance and credence principles are justified. By seeing how, we clarify the role of chance−credence principles in accounts of chance.

October 16, 2017

Christopher Phillips, Carnegie Mellon, History Department

Number the Stars: Baseball Statistics, Scouts, and the History of Data Analysis


Baseball has seemingly become a showcase for the triumph of statistical and probabilistic analysis over subjective, biased, traditional knowledge--the expertise of scorers replacing that of scouts. Little is known, however, about the way scorers and scouts actually make assessments of value. Over the twentieth century, scouts, no less than scorers, had to express their judgments numerically--the practices of scorers and scouts are far more similar than different. Through the history of judgments of value in baseball, we can come to a deeper understanding about the nature of expertise and numerical objectivity, as well as the rise of data analysis more broadly.

October 23, 2017

Herbert Weisberg, Causalytics, LLC and Correlation Research, Inc.

Probability, Paradox, Protocol, and Personalized Medicine


In Willful Ignorance: The Mismeasure of Probability (Wiley, 2014) I traced the evolution of (additive) probability from its humble origins in games of chance to its current dominance in scientific and business activity. My main thesis was that mathematical probability is nothing more nor less than a way to quantify uncertainty by drawing an analogy with a “metaphorical lottery.” In some situations, this hypothetical lottery can be more complex than simply drawing a ball from an urn. In that case, the resulting probabilities are based on a protocol, essentially a set of procedures that define precisely how such a lottery is being performed. Absent an explicit protocol, there may be considerable ambiguity and confusion about what, if anything, the probability statement really means. I believe that many philosophical debates about foundational issues in statistics could be illuminated by thoughtful elucidation of implicit protocols. Attention to such protocols is increasingly important in the context of Big Data problems. I will conclude with a rather surprising application of these ideas to the analysis of individualized causal effects.

October 30, 2017

Daniel Kahneman, Princeton University, Psychology Department

A Conversation with Daniel Kahneman


Glenn Shafer interviews Daniel Kahneman.

November 6, 2017

Jessica John Collins, Columbia University, Philosophy Department

Imaging and Instability


Causal decision theory (CDT) has serious difficulty handling asymmetric instability problems (Richter 1984, Weirich 1985, Egan 2007). I explore the idea that the key to solving these problems is Isaac Levi’s thesis that "deliberation crowds out prediction" i.e. that agents cannot assign determinate credences to their currently available options. I defend the view against recent arguments of Alan Hájek’s and sketch an imaging-based version of CDT with indeterminate credences and expected values. I suggest that imaging might be thought of as the hypothetical revision method appropriate to making true rather than learning true and argue that CDT should be seen not as a rival to orthodox decision theory, but simply as a more permissive account of the norms of rationality.

November 13, 2017

Glenn Shafer, Rutgers Business School

How speculation can explain the equity premium


When measured over decades in countries that have been relatively stable, returns from stocks have been substantially better than returns from bonds. This is often attributed to investors' risk aversion.

In the theory of finance based on game-theoretic probability, in contrast, time-rescaled Brownian motion and the equity premium both emerge from speculation. This explanation accounts for the magnitude of the premium better than risk aversion.

See Working Paper 47.

November 20, 2017

Jenann Ismael, University of Arizona, Philosophy Department

On Chance (or, Why I am only a half-Humean)


The main divide in the philosophical discussion of chances, is between Humean and anti-Humean views. Humeans think that statements about chance can be reduced to statements about patterns in the manifold of actual fact (the ‘Humean Mosaic’). Non-humeans deny that reduction is possible. If one goes back and looks at Lewis’ early papers on chance, there are actually two separable threads in the discussion: one that treats chances as recommended credences and one that identifies chances with patterns in the manifold of categorical fact. I will defend a half humean view that retains the first thread and rejects the second.

The suggestion will be that the Humean view can be thought of as presenting the patterns in the Humean mosaic as the basis for inductive judgments built into the content of probabilistic belief. This could be offered as a template for accounts of laws, capacities, dispositions, and causes – i.e., all of the modal outputs of Best System style theorizing. In each case, the suggestion will be, these are derivative quantities that encode inductive judgments based on patterns in the manifold of fact. They extract projectible regularities from the pattern of fact and give us belief-forming and decision-making policies that have a general, pragmatic justification.

November 27, 2017

Harry Crane, Rutgers, Department of Statistics and Biostatistics

Why "redefining statistical significance" will make the reproducibility crisis worse


A recent proposal to "redefine statistical significance" (Benjamin, et al. Nature Human Behaviour, 2017) claims that false positive rates "would immediately improve" by factors greater than two and fall as low as 5%, and replication rates would double simply by changing the conventional cutoff for 'statistical significance' from P<0.05 to P<0.005. I will survey several criticisms of this proposal and also analyze the veracity of these major claims, focusing especially on how Benjamin, et al neglect the effects of P-hacking in assessing the impact of their proposal. My analysis shows that once P-hacking is accounted for the perceived benefits of the lower threshold all but disappear, prompting two main conclusions:

(i) The claimed improvements to false positive rate and replication rate in Benjamin, et al (2017) are exaggerated and misleading.

(ii) There are plausible scenarios under which the lower cutoff will make the replication crisis worse.

My full analysis can be downloaded here.
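The kind of back-of-the-envelope calculation at issue can be sketched as follows; the power and prior-null values below are illustrative assumptions, not figures from Benjamin et al. or from the full analysis:

```python
def false_positive_rate(alpha, power, prior_null):
    """Fraction of 'significant' findings that are false positives, assuming
    every test is run at level alpha with the given power, and prior_null is
    the proportion of tested hypotheses that are actually null."""
    false_pos = alpha * prior_null          # nulls that cross the threshold
    true_pos = power * (1 - prior_null)     # real effects that cross it
    return false_pos / (false_pos + true_pos)

# Illustrative assumptions: 90% of tested hypotheses are null, power = 0.8.
fpr_05  = false_positive_rate(0.05,  0.8, 0.9)   # = 0.36
fpr_005 = false_positive_rate(0.005, 0.8, 0.9)   # ≈ 0.053
```

On these assumptions the lower cutoff looks like a dramatic improvement; the criticism summarized above is that P-hacking inflates the effective alpha, so the nominal threshold change does not deliver the improvement this naive calculation suggests.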

December 4, 2017

Elke Weber, Princeton University, Psychology and Public Affairs

Query Theory: A Process Account of Preference Construction


Psychologists and behavioral economists agree that many of our preferences are constructed, rather than innate or pre-computed and stored. Little research, however, has explored the implications that established facts about human attention and memory have when people marshal evidence for their decisions. This talk provides an introduction to Query Theory, a psychological process model of preference construction that explains a broad range of phenomena in individual choice with important personal and social consequences, including our reluctance to change, also known as status quo bias, and our excessive impatience when asked to delay consumption.

January 22, 2018

Charles Randy Gallistel, Rutgers University, Department of Psychology

The Perception of Probability


Human and non-human animals estimate the probabilities of events spread out in time. They do so on the basis of a record in memory of the sequence of events, not by the event-by-event updating of the estimate. The current estimate of the probability is the byproduct of the construction of a hierarchical stochastic model for the event sequence. The model enables efficient encoding of the sequence (minimizing memory demands) and it enables nearly optimal prediction (The Minimum Description Length Principle). The estimates are generally close to those of an ideal observer over the full range of probabilities. Changes are quickly detected. Human subjects, at least, have second thoughts about their most recently detected change, revising their opinion in the light of subsequent data, thereby retroactively correcting for the effects of garden path sequences on their model. Their detection of changes is affected by their estimate of the probability of such changes, as it should be. Thus, a sophisticated mechanism for the perception of probability joins the mechanisms for the perception of other abstractions, such as duration, distance, direction, and numerosity, as a foundational and evolutionarily ancient brain mechanism.

January 29, 2018

Andrew Gelman, Columbia University, Department of Statistics and Political Science

Bayes, statistics, and reproducibility


The two central ideas in the foundations of statistics--Bayesian inference and frequentist evaluation--both are defined in terms of replications. For a Bayesian, the replication comes in the prior distribution, which represents possible parameter values under the set of problems to which a given model might be applied; for a frequentist, the replication comes in the reference set or sampling distribution of possible data that could be seen if the data collection process were repeated. Many serious problems with statistics in practice arise from Bayesian inference that is not Bayesian enough, or frequentist evaluation that is not frequentist enough, in both cases using replication distributions that do not make scientific sense or do not reflect the actual procedures being performed on the data. We consider the implications for the replication crisis in science and discuss how scientists can do better, both in data collection and in learning from the data they have.

February 5, 2018

Volodya Vovk, University of London

Game-theoretic probability for mathematical finance


My plan is to give an overview of recent work in continuous-time game-theoretic probability and related areas of mainstream mathematical finance, including stochastic portfolio theory and the capital asset pricing model. Game-theoretic probability does not postulate a stochastic model, but various properties of stochasticity emerge naturally in various games, including the game of trading in financial markets. I will argue that game-theoretic probability provides an answer to the question “where do probabilities come from?” in the context of idealized financial markets with continuous price paths. This talk is obviously related to the talk given by Glenn Shafer in the Fall about the equity premium, but it will be completely self-contained and will concentrate on different topics.

February 12, 2018

Ioannis Karatzas, Columbia University

Mathematical Aspects of Arbitrage


We introduce models for financial markets and, in their context, the notions of "portfolio rules" and of "arbitrage". The normative assumption of "absence of arbitrage" is central in the modern theories of mathematical economics and finance. We relate it to probabilistic concepts such as "fair game", "martingale", "coherence" in the sense of deFinetti, and "equivalent martingale measure".

We also survey recent work in the context of the Stochastic Portfolio Theory pioneered by E.R. Fernholz. This theory provides descriptive conditions under which arbitrage, or "outperformance", opportunities do exist, then constructs simple portfolios that implement them. We also explain how, even in the presence of such arbitrage, most of the standard mathematical theory of finance still functions, though in somewhat modified form.

February 19, 2018

Jean Baccelli, Munich Center for Mathematical Philosophy

Act-State Dependence, Moral Hazard, and State-Dependent Utility


I will present ongoing work on the behavioral identification of beliefs in Savage-style decision theory. I start by distinguishing between two kinds of so-called act-state dependence. One has to do with moral hazard, i.e., the fact that the decision-maker can influence the resolution of the uncertainty to which she is exposed. The other has to do with non-expected utility, i.e., the fact that the decision-maker does not, in the face of uncertainty, behave like an expected utility maximizer. Second, I introduce the problem of state-dependent utility, i.e., the challenges posed by state-dependent utility to the behavioral identification of beliefs. I illustrate this problem in the traditional case of expected utility, and I distinguish between two aspects of the problem—the problem of total and partial unidentification, respectively. Third, equipped with the previous two distinctions, I examine two views that are well established in the literature. The first view is that expected utility and non-expected utility are equally exposed to the problem of state-dependent utility. The second view is that any choice-based solution to this problem must involve moral hazard. I show that these two views must be rejected at once. Non-expected utility is less exposed than expected utility to the problem of state-dependent utility, and (as I explain: relatedly) there are choice-based solutions to this problem that do not involve moral hazard. Building on this conclusion, I re-assess the philosophical and methodological significance of the problem of state-dependent utility.

February 26, 2018

Isaac Wilhelm, Rutgers University, Department of Philosophy

Typical: A Theory of Typicality and Typicality Explanation


Typicality is routinely invoked in everyday contexts: bobcats are typically four-legged; birds can typically fly; people are typically less than seven feet tall. Typicality is invoked in scientific contexts as well: typical gases expand; typical quantum systems exhibit probabilistic behavior. And typicality facts like these---about bobcats, birds, and gases---back many explanations, both quotidian and scientific. But what is it for something to be typical? And how do typicality facts explain? In this talk, I propose a general theory of typicality. I analyze the notions of typical sets, typical properties, and typical objects. I provide a formalism for typicality explanations, drawing on analogies with probabilistic explanations. Along the way, I put the analyses and the formalism to work: I show how typicality can be used to explain a variety of phenomena, from everyday phenomena to the statistical mechanical behavior of gases.

March 5, 2018

David Papineau, Kings College London and City University of New York

Correlations, Causes, and Actions


I shall examine the currently popular ‘interventionist’ approach to causation, and show that, contrary to its billing, it does not explain causation in terms of the possibility of action, but solely in terms of objective population correlations.

March 19, 2018

Mike Titelbaum, University of Wisconsin, Philosophy Department

Ranged Credence, Dilation, and Three Features of Evidence


The philosophical literature has recently developed a fondness for working not just with numerical credences, but with numerical ranges assigned to propositions. I will discuss why a ranged model offers useful flexibility in representing agents' attitudes and appropriate responses to evidence. Then I will discuss why complaints based on "dilation" effects—especially recent puzzle cases from White (2010) and Sturgeon (2010)—do not present serious problems for the ranged credence approach.

March 26, 2018

Nick DiBella, Stanford University, Philosophy Department

Qualitative Probability and Infinitesimal Probability


Infinitesimal probability has long occupied a prominent niche in the philosophy of probability. It has been employed for such purposes as defending the principle of regularity, making sense of rational belief update upon learning evidence of classical probability 0, modeling fair infinite lotteries, and applying decision theory in infinitary contexts. In this talk, I argue that many of the philosophical purposes infinitesimal probability has been enlisted to serve can be served more simply and perspicuously by appealing instead to qualitative probability--that is, the binary relation of one event's being at least as probable as another event. I also discuss results that show that qualitative probability has representational power comparable to (if not greater than) that of infinitesimal probability.

April 2, 2018

Ryan Martin, North Carolina State University

Probability dilution, false confidence, and non-additive beliefs


In the context of statistical inference, data is used to construct degrees of belief about the quantity of interest. If the beliefs assigned to certain hypotheses tend to be large, not because the data provides supporting evidence, but because of some other structural deficiency, then inferences drawn would be questionable. Motivated by the paradoxical probability dilution phenomenon arising in satellite collision analysis, I will introduce a notion of false confidence and show that all additive belief functions have the aforementioned structural deficiency. Therefore, in order to avoid false confidence, a certain class of non-additive belief functions are required, and I will describe these functions and how to construct them.

April 9, 2018

Alex Meehan and Snow Zhang, Princeton University

Chance and independence: do we need a revolution?


What does it mean for two chancy events A and B to be independent? According to the standard analysis, A and B are independent just in case Ch(A and B)=Ch(A)Ch(B). However, this analysis runs into a problem: it implies that a chance-zero event is independent of itself. To get around this issue, Fitelson and Hajek (2017) have recently proposed an alternative analysis: Ch(A)=Ch(A|B). Going one step further, they argue that Kolmogorov's formal framework, as a whole, can't do justice to this new analysis. In fact, they call for a "revolution" in which we "bring to an end the hegemony of Kolmogorov's axiomatization".

We begin by motivating Fitelson and Hajek's initial worry about independence via examples from scientific practice, in which independence judgments are made concerning chance-zero events. Then, we turn to defend Kolmogorov from Fitelson and Hajek's stronger claim. We argue that, at least for chances, there is a motivated extension of Kolmogorov's framework which can accommodate their analysis, and which also does a decent job of systematizing what the scientists are doing. Thus, the call for a "revolution" may be premature.
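The chance-zero worry can be made concrete with a toy chance function (the three-point space and the particular values are illustrative assumptions, not an example from the talk):

```python
from fractions import Fraction

# A toy chance function on a three-point space; the atom 'c' gets chance zero.
chance = {'a': Fraction(1, 2), 'b': Fraction(1, 2), 'c': Fraction(0)}

def ch(event):
    """Chance of an event, i.e. a set of atoms."""
    return sum(chance[w] for w in event)

Z = {'c'}  # a chance-zero event

# Standard analysis: A and B are independent iff Ch(A & B) == Ch(A) * Ch(B).
# For A = B = Z this holds trivially (0 == 0 * 0), so Z counts as
# independent of itself -- the counterintuitive result at issue.
assert ch(Z & Z) == ch(Z) * ch(Z)
```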

April 16, 2018

Sean Carroll, California Institute of Technology, Physics Department

Locating Yourself in a Large Universe


Modern physics frequently envisions scenarios in which the universe is very large indeed: large enough that any allowed local situation is likely to exist more than once, perhaps an infinite number of times. Multiple copies of you might exist elsewhere in space, in time, or on other branches of the wave function. I will argue for a unified strategy for dealing with self-locating uncertainty that recovers the Born Rule of quantum mechanics in ordinary situations, and suggests a cosmological measure in a multiverse. The approach is fundamentally Bayesian, treating probability talk as arising from credences in conditions of uncertainty. Such an approach doesn't work in cosmologies dominated by random fluctuations (Boltzmann Brains), so I will argue in favor of excluding such models on the basis of cognitive instability.

April 23, 2018

Kenny Easwaran, Texas A & M

Countable additivity - and beyond?


While countable additivity is a requirement of the orthodox mathematical theory of probability, some theorists (notably Bruno de Finetti and followers) have argued that only finite additivity ought to be required. I point out that using merely finitely-additive functions actually brings in *more* infinitary complexity rather than less. If we must go beyond finite additivity to avoid this infinitary complexity, there is a question of why to stop at countable additivity. I give two arguments for countable additivity that don't generalize to go further.

April 30, 2018

Ted Porter, UCLA, Department of History

How Human Genetics Was Shaped by Data on Madness


September 12, 2016

Glenn Shafer, Rutgers University, Business School

Calibrate p-values by taking the square root


For nearly 100 years, researchers have persisted in using p-values in spite of fierce criticism. Both Bayesians and Neyman-Pearson purists contend that use of a p-value is cheating even in the simplest case, where the hypothesis to be tested and a test statistic are specified in advance. Bayesians point out that a small p-value often does not translate into a strong Bayes factor against the hypothesis. Neyman-Pearson purists insist that you should state a significance level in advance and stick with it, even if the p-value turns out to be much smaller than this significance level. But many applied statisticians persist in feeling that a p-value much smaller than the significance level is meaningful evidence. In the game-theoretic approach to probability (see my 2001 book with Vladimir Vovk), you test a statistical hypothesis by using its probabilities to bet. You reject at a significance level of 0.01, say, if you succeed in multiplying the capital you risk by 100. In this picture, we can calibrate small p-values so as to measure their meaningfulness while absolving them of cheating. There are various ways to implement this calibration, but one of them leads to a very simple rule of thumb: take the square root of the p-value. Thus rejection at a significance level of 0.01 requires a p-value of one in 10,000.
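The rule of thumb itself is one line of arithmetic (this sketches only the calibration, not the game-theoretic argument that justifies it):

```python
import math

def calibrated(p):
    """Square-root rule of thumb: treat sqrt(p) as the calibrated
    measure of evidence carried by an observed p-value."""
    return math.sqrt(p)

# Rejection at the 0.01 level now requires an observed p-value of 1/10,000.
assert abs(calibrated(1 / 10_000) - 0.01) < 1e-12
```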

September 19, 2016

Isaac Wilhelm, Rutgers University, Philosophy Department

A statistical analysis of luck


According to Pritchard's analysis of luck (PAL), an event is lucky just in case it fails to obtain in a sufficiently large class of sufficiently close possible worlds. Though there are several reasons to like the PAL, it faces at least two counterexamples. After reviewing those counterexamples, I introduce a new, statistical analysis of luck (SAL). The reasons to like the PAL are also reasons to like the SAL, but the SAL is not susceptible to the counterexamples.

September 26, 2016

Barry Loewer, Rutgers University, Philosophy Department

What probabilities there are and what probabilities are


The sciences, especially fundamental physics, contain theories that posit objective probabilities. But what are objective probabilities?

Are they fundamental features of reality as mass or charge might be? or do more fundamental facts, for example frequencies, ground probabilities?

In my talk I will survey some views about what probabilities there are and what grounds them.

October 3, 2016

Michael Strevens, New York University, Philosophy Department

Dynamic Probabilities and Initial Conditions


Dynamic approaches to understanding the foundations of physical probability in the non-fundamental sciences (from statistical physics through evolutionary biology and beyond) turn on special properties of physical processes that are apt to produce "probabilistically patterned" outcomes. I will introduce one particular dynamic approach of especially wide scope.

Then a problem: dynamic properties on their own are never quite sufficient to produce the observed patterns; in addition, some sort of probabilistic assumption about initial conditions must be made. What grounds the initial condition assumption? I discuss some possible answers.

October 10, 2016

Prakash Gorroochurn, Columbia University, Biostatistics Department

Fisher’s fiducial probability – a historical perspective


Of R.A. Fisher's countless statistical innovations, fiducial probability is one of the very few that has found little favor among probabilists and statisticians. Fiducial probability is still misunderstood today and rarely mentioned in current textbooks. This presentation will attempt to offer a historical perspective on the topic, explaining Fisher's motivations and subsequent oppositions from his contemporaries. The talk is based on my newly released book "Classic Topics on the History of Modern Mathematical Statistics: From Laplace to More Recent Times."

October 17, 2016

Teddy Seidenfeld, Carnegie Mellon University, Philosophy Department

A modest proposal to use rates of incoherence as a guide for personal uncertainties about logic and mathematics


It is an old and familiar challenge to normative theories of personal probability that they do not make room for non-trivial uncertainties about (the non-controversial parts of) logic and mathematics. Savage (1967) gives a frank presentation of the problem, noting that his own (1954) classic theory of rational preference serves as a poster-child for the challenge.

Here is the outline of this presentation:
     First is a review of the challenge.
     Second, I comment on two approaches that try to solve the challenge by making surgical adjustments to the canonical theory of coherent personal probability. One approach relaxes the Total Evidence Condition: see Good (1971). The other relaxes the closure conditions on a measure space: see Gaifman (2004). Hacking (1967) incorporates both of these approaches.
     Third, I summarize an account of rates of incoherence, explain how to model uncertainties about logical and mathematical questions with rates of incoherence, and outline how to use this approach in order to guide the uncertain agent in the use of, e.g., familiar numerical Monte Carlo methods in order to improve her/his credal state about such questions (2012).

Based on joint work with J.B.Kadane and M.J.Schervish

Gaifman, H. (2004) Reasoning with Limited Resources and Assigning Probabilities to Arithmetic Statements. Synthese 140: 97-119.
Good, I.J. (1971) Twenty-seven Principles of Rationality. In Good Thinking, Minn. U. Press (1983): 15-19.
Hacking, I. (1967) Slightly More Realistic Personal Probability. Phil. Sci. 34: 311-325.
Savage, L.J. (1967) Difficulties in the Theory of Personal Probability. Phil. Sci. 34: 305-310.
Seidenfeld, T., Schervish, M.J., and Kadane, J.B. (2012) What kind of uncertainty is that? J.Phil. 109: 516-533.

October 24, 2016

Alan Hajek, Australian National University, School of Philosophy

Staying Regular?


'Regularity' conditions provide bridges between possibility and probability. They have the form:

If X is possible, then the probability of X is positive (or equivalents). Especially interesting are the conditions we get when we understand 'possible' doxastically, and 'probability' subjectively. I characterize these senses of 'regularity' in terms of a certain internal harmony of an agent's probability space (Ω, F, P). I distinguish three grades of probabilistic involvement. A set of possibilities may be recognized by such a probability space by being a subset of Ω; by being an element of F; and by receiving positive probability from P. An agent's space is regular if these three grades collapse into one.

I review several arguments for regularity as a rationality norm. An agent could violate this norm in two ways: by assigning probability zero to some doxastic possibility, and by failing to assign probability altogether to some doxastic possibility. I argue for the rationality of each kind of violation.
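A toy probability space (an illustrative assumption, not an example from the talk; `omega` stands in for Ω) makes the three grades, and both kinds of violation, concrete:

```python
from fractions import Fraction

omega = {'rain', 'shine', 'miracle'}                # grade 1: subsets of omega
F = [set(), {'miracle'}, {'rain', 'shine'}, omega]  # grade 2: elements of F
P = {frozenset(): Fraction(0),
     frozenset({'miracle'}): Fraction(0),           # in F, but chance zero
     frozenset({'rain', 'shine'}): Fraction(1),
     frozenset(omega): Fraction(1)}

# First violation: {'miracle'} is a doxastic possibility (grade 1) and an
# event (grade 2), yet receives probability zero -- grade 3 fails.
assert P[frozenset({'miracle'})] == 0

# Second violation: {'rain'} is a subset of omega (grade 1) but not an
# element of F (grade 2), so no probability is assigned to it at all.
assert {'rain'} <= omega and {'rain'} not in F
```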

Both kinds of violations of regularity have serious consequences for traditional Bayesian epistemology. I consider their ramifications for:

- conditional probability

- conditionalization

- probabilistic independence

- decision theory

October 31, 2016

Vladimir Vapnik, Facebook AI and Columbia University

Brute force and intelligent models of learning


This talk is devoted to a new paradigm of machine learning, in which an Intelligent Teacher is involved. During the training stage, the Intelligent Teacher provides the Student with information that contains, along with the classification of each example, additional privileged information (for example, an explanation) about this example. The talk describes two mechanisms that can be used for significantly accelerating the speed of the Student's learning using privileged information: (1) correction of the Student's concepts of similarity between examples, and (2) direct Teacher-Student knowledge transfer.

In this talk I will also discuss general ideas in the philosophical foundations of induction and generalization, related to Huber's concept of falsifiability and to holistic methods of inference.

November 7, 2016

Adam Elga, Princeton University, Philosophy Department

Fragmented decision theory


Bayesian decision theory assumes that its subjects are perfectly coherent: logically omniscient and able to perfectly access their information. Since imperfect coherence is both rationally permissible and widespread, it is desirable to extend decision theory to accommodate incoherent subjects. New 'no-go' proofs show that the rational dispositions of an incoherent subject cannot in general be represented by a single assignment of numerical magnitudes to sentences (whether or not those magnitudes satisfy the probability axioms). Instead, we should attribute to each incoherent subject a whole family of probability functions, indexed to choice conditions. If, in addition, we impose a "local coherence" condition, we can make good on the thought that rationality requires respecting easy logical entailments but not hard ones. The result is an extension of decision theory that applies to incoherent or fragmented subjects, assimilates into decision theory the distinction between knowledge-that and knowledge-how, and applies to cases of "in-between belief".

This is joint work with Agustin Rayo (MIT).

November 14, 2016

Jamie Pietruska, Rutgers University, Department of History

"Old Probabilities" and "Cotton Guesses": Weather Forecasts, Agricultural Statistics, and Uncertainty in the Late-Nineteenth and Early-Twentieth-Century United States


This talk, which is drawn from Looking Forward: Prediction and Uncertainty in Modern America (forthcoming, University of Chicago Press), will examine weather forecasting and cotton forecasting as forms of knowledge production that initially sought to conquer unpredictability but ultimately accepted uncertainty in modern economic life. It will focus on contests between government and commercial forecasters over who had the authority to predict the future and the ensuing epistemological debates over the value and meaning of forecasting itself. Intellectual historians and historians of science have conceptualized the late nineteenth century in terms of “the taming of chance” in the shift from positivism to probabilism, but, as this talk will demonstrate, Americans also grappled with predictive uncertainties in daily life during a time when they increasingly came to believe in but also question the predictability of the weather, the harvest, and the future.

November 21, 2016

Glenn Shafer, Rutgers University, Business School

Defensive forecasting


In game-theoretic probability, Forecaster gives probabilities (or upper expectations) on each round of the game, and Skeptic tests these probabilities by betting, while Reality decides the outcomes. Can Forecaster pass Skeptic's tests?

As it turns out, Forecaster can defeat any particular strategy for Skeptic, provided only that each move prescribed by the strategy varies continuously with respect to Forecaster's previous move. Forecaster wants to defeat more than a single strategy for Skeptic; he wants to defeat simultaneously all the strategies Skeptic might use. But as we will see, Forecaster can often amalgamate the strategies he needs to defeat by averaging them, and then he can play against the average. This is called defensive forecasting. Defeating the average may be good enough, because when any one of the strategies rejects Forecaster's validity, the average will reject as well, albeit less strongly.
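The first step of the argument, that Forecaster can defeat any single continuous strategy for Skeptic, can be illustrated with a toy binary-outcome sketch. This is not from the talk; the function name and the example strategy are my own. Skeptic's strategy is a continuous stake function of the announced forecast, and Forecaster bisects to a forecast at which the stake (and hence Skeptic's gain) vanishes, whatever Reality does:

```python
def neutralizing_forecast(bet, tol=1e-9):
    """Find a forecast p in [0, 1] at which Skeptic's continuous betting
    strategy gains nothing, whatever Reality's binary outcome y.

    bet(p) is Skeptic's stake given forecast p: positive means betting
    on outcome 1, negative on outcome 0; the round's gain is bet(p) * (y - p).
    """
    lo, hi = 0.0, 1.0
    if bet(lo) <= 0:        # Skeptic bets on 0 even at price 0: gain <= 0
        return lo
    if bet(hi) >= 0:        # Skeptic bets on 1 even at price 1: gain <= 0
        return hi
    # bet is positive at 0 and negative at 1: by continuity there is a
    # sign change, where the stake (and hence the gain) vanishes.
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if bet(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Example: Skeptic bets on 1 whenever the forecast is below 0.7.
p = neutralizing_forecast(lambda q: 0.7 - q)
for y in (0, 1):                    # whatever Reality chooses...
    gain = (0.7 - p) * (y - p)      # ...Skeptic's gain is (near) zero
```

Defensive forecasting then plays this move against an average of the strategies Skeptic might use, rather than against each one separately.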

This result has implications for the meaning of probability. It reveals that the crucial step in placing an evidential question in a probabilistic framework is its placement in a sequence of questions. Once we have chosen the sequence, good sequential probabilities can be given, and the validation of these probabilities by experience signifies less than commonly thought.

(1) Defensive forecasting, by Vladimir Vovk, Akimichi Takemura, and Glenn Shafer (Working Paper #8 at
(2) Game-theoretic probability and its uses, especially defensive forecasting, by Glenn Shafer (Working Paper #22 at

November 28, 2016

Elie Ayache, Ito 33

Writing the future


Derivative valuation theory is based on the formalism of abstract probability theory and random variables. However, when it is made part of the pricing tool that the 'quant' (quantitative analyst) develops and that the option trader uses, it becomes a pricing technology. The latter exceeds the theory and the formalism. Indeed, the contingent payoff (defining the derivative) is no longer the unproblematic random variable that we used to synthesize by dynamic replication, or whose mathematical expectation we used merely to evaluate, but it becomes a contingent claim. By this distinction we mean that the contingent claim crucially becomes traded independently of its underlying asset, and that its price is no longer identified with the result of a valuation. On the contrary, it becomes a market given and will now be used as an input to the pricing models, inverting them (implied volatility and calibration). One must recognize a necessity, not an accident, in this breach of the formal framework, and even read in it the definition of the market, now including the derivative instrument. Indeed, the trading of derivatives is the primary purpose of their pricing technology, not a subsidiary usage. The question then arises of a possible formalization of this augmented market, or more simply, of the market. To that purpose we introduce the key notion of writing.

December 5, 2016

Ben Levinstein, Rutgers University, Philosophy Department

Higher-order evidence, Accuracy, and Information Loss


Higher-order evidence is evidence that you're handling information out of accord with epistemic norms. For instance, you may gain evidence that you're possibly drugged and can't think straight. A natural thought is that you respond by lowering your confidence that you got a complex calculation right. If so, HOE has a number of peculiar features. For instance, if you should take it into account, it leads to violations of Good's theorem and the norm to update by conditionalization. This motivates a number of philosophers to embrace the steadfast position: you shouldn't lower your confidence even though you have evidence you're drugged. I disagree. I argue that HOE is a kind of information-loss. This both explains its peculiar features and shows what's wrong with some recent steadfast arguments. Telling agents not to respond is like telling them never to forget anything.

December 12, 2016

Vladimir Vovk, University of London

Treatment of uncertainty in the foundations of probability


Kolmogorov's measure-theoretic axioms of probability formalize the Knightian notion of risk. Classical statistics adds a degree of Knightian uncertainty, since there is no probability distribution on the parameters, but uncertainty and risk are clearly separated. Game-theoretic probability formalizes the picture in which both risk and uncertainty interfere at every moment. The fruitfulness of this picture will be demonstrated by open theories in science and the emergence of stochasticity and probability in finance.

January 23, 2017

Shelly Goldstein, Rutgers University, Mathematics Department

Probability in Quantum Mechanics (and Bohmian Mechanics)


No abstract.

January 30, 2017

Alexander Stein, Brooklyn Law School

Behavioral Probability


Throughout their long history, humans have worked hard to tame chance. They adapted to their uncertain physical and social environments by using the method of trial and error. This evolutionary process made humans reason about uncertain facts the way they do. Behavioral economists argue that humans’ natural selection of their prevalent mode of reasoning wasn’t wise. They censure this mode of reasoning for violating the canons of mathematical probability that a rational person must obey.

Based on the insights from probability theory and the philosophy of induction, I argue that a rational person need not apply mathematical probability in making decisions about individual causes and effects. Instead, she should be free to use common sense reasoning that generally aligns with causative probability. I also show that behavioral experiments uniformly miss their target when they ask reasoners to extract probability from information that combines causal evidence with statistical data. Because it is perfectly rational for a person focusing on a specific event to prefer causal evidence to general statistics, those experiments establish no deviations from rational reasoning. Those experiments are also flawed in that they do not separate the reasoners’ unreflective beliefs from rule-driven acceptances. The behavioral economists’ claim that people are probabilistically challenged consequently remains unproven.


February 6, 2017

Branden Fitelson, Northeastern University, Philosophy Department

Two Approaches to Belief Revision


In this paper, we compare and contrast two methods for the revision of qualitative (viz., "full") beliefs. The first ("Bayesian") method is generated by a simplistic diachronic Lockean thesis requiring coherence with the agent's posterior credences after conditionalization. The second ("Logical") method is the orthodox AGM approach to belief revision. Our primary aim will be to characterize the ways in which these two approaches can disagree with each other, especially in the special case where the agent's belief sets are deductively cogent.
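The "Bayesian" method described above can be sketched in a few lines. This is a minimal illustration, not from the paper; the worlds, propositions, and helper names are my own. Propositions are sets of worlds, the agent conditionalizes on new evidence, and then believes exactly those propositions whose posterior credence exceeds the Lockean threshold t:

```python
from fractions import Fraction

def conditionalize(prior, evidence):
    """Bayesian conditionalization: restrict the prior (a dict mapping
    worlds to probabilities) to the evidence set and renormalize."""
    total = sum(prior[w] for w in evidence if w in prior)
    return {w: prior[w] / total for w in evidence if w in prior}

def lockean_beliefs(prior, evidence, propositions, t):
    """Lockean revision: believe exactly those propositions (sets of
    worlds) whose posterior probability exceeds the threshold t."""
    posterior = conditionalize(prior, evidence)
    def prob(p):
        return sum(posterior.get(w, Fraction(0)) for w in p)
    return [p for p in propositions if prob(p) > t]

prior = {'w1': Fraction(2, 5), 'w2': Fraction(2, 5), 'w3': Fraction(1, 5)}
props = [frozenset({'w1'}), frozenset({'w1', 'w2'}), frozenset({'w2', 'w3'})]
# Learn that w3 is ruled out; believe whatever ends up above t = 3/4.
believed = lockean_beliefs(prior, {'w1', 'w2'}, props, Fraction(3, 4))
```

With these numbers only the proposition {w1, w2} clears the threshold; AGM revision, by contrast, is defined directly on the belief set and need not agree with this output.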


February 13, 2017

Gretchen Chapman, Rutgers University, Psychology Department

Empirical Experiments on the Gambler's Fallacy


The gambler’s fallacy (GF) is a classic judgment bias where, when predicting events from an i.i.d. sequence, decision makers inflate the perceived likelihood of one outcome (e.g. red outcome from a roulette wheel spin) after a run of the opposing outcome (e.g., a streak of black outcomes). This phenomenon suggests that decision makers act as if the sampling is performed without replacement rather than with replacement. A series of empirical experiments support the idea that lay decision makers indeed have this type of underlying mental model. In an online experiment, MTurk participants drew marbles from an urn after receiving instructions that made clear that the marble draws were performed with vs. without replacement. The GF pattern appeared only under the without-replacement instructions. In two in-lab experiments, student participants predicted a series of roulette spins that were either grouped into blocks or ungrouped as one session. The GF pattern was manifest on most trials, but it was eliminated on the first trial of each block in the blocked condition. This bracketing result suggests that the sampling frame is reset when a new block is initiated. Both studies had a number of methodological strengths: they used actual random draws with no deception of participants, and participants made real-outcome bets on their predictions, such that exhibiting the GF was costly to subjects (yet they still showed it). Finally, the GF was operationalized as predicting or betting on an outcome as a function of run length of the opposing outcome, which revealed a nonlinear form of the GF. These results illuminate the nature of the GF and the decision processes underlying it as well as illustrate a method to eliminate this classic judgment bias.
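The without-replacement mental model that the experiments probe can be made concrete with a small urn calculation. This sketch is mine, not from the studies; the urn composition and helper name are assumptions for illustration. Under i.i.d. sampling a streak of blacks leaves the chance of red untouched, whereas under sampling without replacement the streak depletes the blacks and genuinely raises it:

```python
from fractions import Fraction

def p_red_next(reds, blacks, black_run, with_replacement):
    """Probability that the next draw is red after a run of `black_run`
    black outcomes, under the two sampling models."""
    if with_replacement:              # i.i.d.: the run is irrelevant
        return Fraction(reds, reds + blacks)
    # Without replacement: the run has removed black marbles from the urn.
    return Fraction(reds, reds + blacks - black_run)

# A 5-red / 5-black urn after a streak of three blacks:
iid = p_red_next(5, 5, 3, with_replacement=True)    # stays 1/2
urn = p_red_next(5, 5, 3, with_replacement=False)   # rises to 5/7
```

A decision maker who tacitly applies the second model to a roulette wheel, where the first is correct, exhibits exactly the GF pattern described above.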

February 20, 2017

Michał Godziszewski, University of Warsaw, Institute of Philosophy

Dutch Books and nonclassical probability spaces


We investigate how Dutch Book considerations can be conducted in the context of two classes of nonclassical probability spaces used in the philosophy of physics. In particular, we show that a recent proposal by B. Feintzeig to find so-called "generalized probability spaces" which would not be susceptible to a Dutch Book and would not possess a classical extension is doomed to fail. Noting that the particular notion of a nonclassical probability space used by Feintzeig is not the most common one employed in the philosophy of physics, and that his usage of the "classical" Dutch Book concept is not appropriate in "nonclassical" contexts, we then argue that if we switch to the more frequently used formalism and use the correct notion of a Dutch Book, then no probability space is susceptible to a Dutch Book. We also settle a hypothesis regarding the existence of classical extensions of a class of generalized probability spaces.

This is joint work with Leszek Wroński (Jagiellonian University).

February 27, 2017

Hans Halvorson, Princeton University, Philosophy Department

Probability Ex Nihilo


In many mathematical settings, there is a sense in which we get probability "for free." I’ll consider some ways in which this notion "for free" can be made precise - and its connection (or lack thereof) to rational credences. As one specific application, I’ll consider the meaning of cosmological probabilities, i.e. probabilities over the space of possible universes.

March 6, 2017

Tamar Lando, Columbia University, Philosophy Department

Runaway Credences and the Principle of Indifference


The principle of indifference is a rule for rationally assigning precise degrees of confidence to possibilities among which we have no reason to discriminate. I argue that this principle, in combination with standard Bayesian conditionalization, has untenable consequences. In particular, it allows agents to leverage their ignorance toward a position of very strong confidence vis-à-vis propositions about which they know very little. I study the consequences for our response to puzzles about self-locating belief, where a restricted principle of indifference (together with Bayesian conditionalization) is widely endorsed.

March 20, 2017

Sandy Zabell, Northwestern University, Mathematics Department

Alan Turing and the Applications of Probability to Cryptography


In the years before World War II, Bayesian statistics went into eclipse, a casualty of the combined attacks of statisticians such as R. A. Fisher and Jerzy Neyman. During the war itself, however, the brilliant but statistically naive Alan Turing developed de novo a Bayesian approach to cryptanalysis, which he then applied to good effect against a number of German encryption systems. The year 2012 was the centenary of Alan Turing's birth, and as part of the celebrations the British authorities released materials casting light on Turing's Bayesian approach. In this talk I discuss how Turing's Bayesian view of inductive inference was reflected in his approach to cryptanalysis, and give an example where his Bayesian methods proved more effective than the orthodox ones more commonly used. I will conclude by discussing the curious career of I. J. Good, initially one of Turing's assistants at Bletchley Park. Good became one of the most influential advocates for Bayesian statistics after the war, although he hid the reasons for his belief in their efficacy for many decades, owing to their classified origins.

March 27, 2017

Brad Weslake, New York University-Shanghai, Philosophy Department

Fitness and Variance


This paper is about the role of probability in evolutionary theory. I present some models of natural selection in populations with variance in reproductive success. The models have been taken by many to entail that the propensity theory of fitness is false. I argue that the models do not entail that fitness is not a propensity. Instead, I argue that the lesson of the models is that the fitness of a type is not grounded in the fitness of individuals of that type.

April 3, 2017

Peter Achinstein, Johns Hopkins University, Philosophy Department

Epistemic Simplicity: The Last Refuge of a Scoundrel


Some of the greatest scientists, including Newton and Einstein, invoke simplicity in defense of a theory they promote. Newton does so in defense of his law of gravity, Einstein in defense of his general theory of relativity. Both claim that nature is simple, and that, because of this, simplicity is an epistemic virtue. I propose to ask what these claims mean and whether, and if so how, they can be supported. The title of the talk should tell you where I am headed.

April 10, 2017

Harry Crane, Rutgers University, Department of Statistics

Probabilities as Shapes


In mathematics, statistics, and perhaps even in our intuition, it is conventional to regard probabilities as numbers, but I prefer instead to think of them as shapes. I'll explain how and why I prefer to think of probabilities as shapes instead of numbers, and will discuss how these probability shapes can be formalized in terms of infinity groupoids (or homotopy types) from homotopy type theory (HoTT).

April 17, 2017

Dimitris Tsementzis, Rutgers University, Department of Statistics

Sample Structures


I will outline some difficult cases for the classical formalization of a sample space as a *set* of outcomes, and argue that some of these cases are better served by a formalization of a sample space as an appropriate *structure* of outcomes.

April 24, 2017

Miriam Schoenfield, University of Texas, Department of Philosophy

Beliefs Formed Arbitrarily


This paper addresses the concern of beliefs formed arbitrarily: for example, religious, political and moral beliefs that we realize we possess because of the social environments we grew up in. The paper motivates a set of criteria for determining when the fact that our beliefs were arbitrarily formed should motivate a revision. What matters, I will argue, is how precise or imprecise your probabilities are with respect to the matter in question.

May 1, 2017

Nicholas Teh, Notre Dame University, Philosophy Department

Probability, Inconsistency, and the Quantum


Various images of the inconsistency between (the empirical probabilities of) quantum theory and classical probability have been handed down to us by tradition. Of these, two of the most compelling are the "geometric" image of inconsistency implicit in Kochen-Specker arguments, and the "Dutch Book violation" image of inconsistency which is familiar to us from epistemology and the philosophy of rationality. In this talk, I will argue that there is a systematic and highly general relationship between the two images.