Seminars by Term:

September 12, 2016

Glenn Shafer, Rutgers University, Business School

Calibrate p-values by taking the square root

Abstract

For nearly 100 years, researchers have persisted in using p-values in spite of fierce criticism. Both Bayesians and Neyman-Pearson purists contend that use of a p-value is cheating even in the simplest case, where the hypothesis to be tested and a test statistic are specified in advance. Bayesians point out that a small p-value often does not translate into a strong Bayes factor against the hypothesis. Neyman-Pearson purists insist that you should state a significance level in advance and stick with it, even if the p-value turns out to be much smaller than this significance level. But many applied statisticians persist in feeling that a p-value much smaller than the significance level is meaningful evidence. In the game-theoretic approach to probability (see my 2001 book with Vladimir Vovk, described at www.probabilityandfinance.com), you test a statistical hypothesis by using its probabilities to bet. You reject at a significance level of 0.01, say, if you succeed in multiplying the capital you risk by 100. In this picture, we can calibrate small p-values so as to measure their meaningfulness while absolving them of cheating. There are various ways to implement this calibration, but one of them leads to a very simple rule of thumb: take the square root of the p-value. Thus rejection at a significance level of 0.01 requires a p-value of one in 10,000.
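
A back-of-the-envelope illustration of the rule of thumb, in Python (the numbers are my own, not from the talk):

    # Rule-of-thumb calibration from the abstract: the calibrated measure of evidence is
    # the square root of the p-value, so rejection at level alpha requires p <= alpha**2.
    from math import sqrt

    def calibrated(p):
        """Square-root calibration of a p-value."""
        return sqrt(p)

    print(calibrated(0.0001))  # 0.01 -> counts as rejection at the 0.01 level
    print(calibrated(0.01))    # 0.1  -> much weaker evidence than the raw p-value suggests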

September 19, 2016

Isaac Wilhelm, Rutgers University, Philosophy Department

A statistical analysis of luck

Abstract

According to Pritchard's analysis of luck (PAL), an event is lucky just in case it fails to obtain in a sufficiently large class of sufficiently close possible worlds. Though there are several reasons to like the PAL, it faces at least two counterexamples. After reviewing those counterexamples, I introduce a new, statistical analysis of luck (SAL). The reasons to like the PAL are also reasons to like the SAL, but the SAL is not susceptible to the counterexamples.

September 26, 2016

Barry Loewer, Rutgers University, Philosophy Department

What probabilities there are and what probabilities are

Abstract

The sciences, especially fundamental physics, contain theories that posit objective probabilities. But what are objective probabilities?

Are they fundamental features of reality, as mass or charge might be? Or do more fundamental facts, for example frequencies, ground probabilities?

In my talk I will survey some views about what probabilities there are and what grounds them.

October 3, 2016

Michael Strevens, New York University, Philosophy Department

Dynamic Probabilities and Initial Conditions

Abstract

Dynamic approaches to understanding the foundations of physical probability in the non-fundamental sciences (from statistical physics through evolutionary biology and beyond) turn on special properties of physical processes that are apt to produce "probabilistically patterned" outcomes. I will introduce one particular dynamic approach of especially wide scope.

Then a problem: dynamic properties on their own are never quite sufficient to produce the observed patterns; in addition, some sort of probabilistic assumption about initial conditions must be made. What grounds the initial condition assumption? I discuss some possible answers.

October 10, 2016

Prakash Gorroochurn, Columbia University, Biostatistics Department

Fisher’s fiducial probability – a historical perspective

Abstract

Of R.A. Fisher's countless statistical innovations, fiducial probability is one of the very few that has found little favor among probabilists and statisticians. Fiducial probability is still misunderstood today and rarely mentioned in current textbooks. This presentation will attempt to offer a historical perspective on the topic, explaining Fisher's motivations and the subsequent opposition from his contemporaries. The talk is based on my newly released book "Classic Topics on the History of Modern Mathematical Statistics: From Laplace to More Recent Times."

October 17, 2016

Teddy Seidenfeld, Carnegie Mellon University, Philosophy Department

A modest proposal to use rates of incoherence as a guide for personal uncertainties about logic and mathematics

Abstract

It is an old and familiar challenge to normative theories of personal probability that they do not make room for non-trivial uncertainties about (the non-controversial parts of) logic and mathematics. Savage (1967) gives a frank presentation of the problem, noting that his own (1954) classic theory of rational preference serves as a poster-child for the challenge.

Here is the outline of this presentation:
     First is a review of the challenge.
     Second, I comment on two approaches that try to solve the challenge by making surgical adjustments to the canonical theory of coherent personal probability. One approach relaxes the Total Evidence Condition: see Good (1971). The other relaxes the closure conditions on a measure space: see Gaifman (2004). Hacking (1967) incorporates both of these approaches.
     Third, I summarize an account of rates of incoherence, explain how to model uncertainties about logical and mathematical questions with rates of incoherence, and outline how to use this approach in order to guide the uncertain agent in the use of, e.g., familiar numerical Monte Carlo methods in order to improve her/his credal state about such questions (2012).

Based on joint work with J. B. Kadane and M. J. Schervish.

References:
Gaifman, H. (2004) Reasoning with Limited Resources and Assigning Probabilities to Arithmetic Statements. Synthese 140: 97-119.
Good, I.J. (1971) Twenty-seven Principles of Rationality. In Good Thinking, Minn. U. Press (1983): 15-19.
Hacking, I. (1967) Slightly More Realistic Personal Probability. Phil. Sci. 34: 311-325.
Savage, L.J. (1967) Difficulties in the Theory of Personal Probability. Phil. Sci. 34: 305-310.
Seidenfeld, T., Schervish, M.J., and Kadane, J.B. (2012) What kind of uncertainty is that? J. Phil. 109: 516-533.

October 24, 2016

Alan Hajek, Australian National University, School of Philosophy

Staying Regular?

Abstract

'Regularity' conditions provide bridges between possibility and probability. They have the form:

If X is possible, then the probability of X is positive (or equivalents).

Especially interesting are the conditions we get when we understand 'possible' doxastically, and 'probability' subjectively. I characterize these senses of 'regularity' in terms of a certain internal harmony of an agent's probability space (omega, F, P). I distinguish three grades of probabilistic involvement. A set of possibilities may be recognized by such a probability space by being a subset of omega; by being an element of F; and by receiving positive probability from P. An agent's space is regular if these three grades collapse into one.
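
A toy finite example of the three grades, and of a failure of regularity, in Python (the worlds and numbers are my own illustrative assumptions):

    # A toy probability space (omega, F, P) with F the full power set. The world w3 is
    # recognized at the first two grades (subset of omega, element of F) but receives
    # zero probability at the third, so this space is not regular in the sense above.
    from itertools import chain, combinations

    omega = {"w1", "w2", "w3"}
    F = {frozenset(s) for s in chain.from_iterable(combinations(omega, r) for r in range(len(omega) + 1))}
    P_atoms = {"w1": 0.5, "w2": 0.5, "w3": 0.0}

    def grades(possibilities):
        """Which grades of probabilistic involvement a set of possibilities enjoys."""
        s = frozenset(possibilities)
        result = []
        if s <= omega:
            result.append("subset of omega")
        if s in F:
            result.append("element of F")
        if s in F and sum(P_atoms[w] for w in s) > 0:
            result.append("positive probability from P")
        return result

    print(grades({"w3"}))        # first two grades only: regularity fails here
    print(grades({"w1", "w2"}))  # all three grades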

I review several arguments for regularity as a rationality norm. An agent could violate this norm in two ways: by assigning probability zero to some doxastic possibility, and by failing to assign probability altogether to some doxastic possibility. I argue for the rationality of each kind of violation.

Both kinds of violations of regularity have serious consequences for traditional Bayesian epistemology. I consider their ramifications for:

- conditional probability

- conditionalization

- probabilistic independence

- decision theory

October 31, 2016

Vladimir Vapnik, Facebook AI and Columbia University

Brute force and intelligent models of learning

Abstract

This talk is devoted to a new paradigm of machine learning in which an Intelligent Teacher is involved. During the training stage, the Intelligent Teacher provides the Student, along with the classification of each example, with additional privileged information (for example, an explanation) about that example. The talk describes two mechanisms that can be used to significantly accelerate the Student's learning using privileged information: (1) correction of the Student's concepts of similarity between examples, and (2) direct Teacher-Student knowledge transfer.
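
A minimal Python sketch of the knowledge-transfer idea, under my own assumptions (a distillation-style illustration in the spirit of learning using privileged information, not Vapnik's SVM+ algorithm; assumes scikit-learn is available):

    # A Teacher trained on a privileged feature produces soft targets; a Student that
    # never sees the privileged feature learns from those soft targets.
    import numpy as np
    from sklearn.linear_model import LogisticRegression, Ridge

    rng = np.random.default_rng(0)
    n = 500
    x_priv = rng.normal(size=(n, 1))                    # privileged feature, training time only
    x = x_priv + rng.normal(scale=2.0, size=(n, 1))     # noisy observable feature
    y = (x_priv[:, 0] > 0).astype(int)                  # labels driven by the privileged feature

    teacher = LogisticRegression().fit(x_priv, y)       # Teacher uses the privileged information
    soft_targets = teacher.predict_proba(x_priv)[:, 1]  # Teacher's soft class probabilities

    student = Ridge().fit(x, soft_targets)              # Student learns from the soft targets

    x_new = rng.normal(scale=2.2, size=(5, 1))          # at test time only x is available
    print((student.predict(x_new) > 0.5).astype(int))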

In this talk I will also discuss some general ideas in the philosophical foundations of induction and generalization, related to Popper's concept of falsifiability and to holistic methods of inference.

November 7, 2016

Adam Elga, Princeton University, Philosophy Department

Fragmented decision theory

Abstract

Bayesian decision theory assumes that its subjects are perfectly coherent: logically omniscient and able to perfectly access their information. Since imperfect coherence is both rationally permissible and widespread, it is desirable to extend decision theory to accommodate incoherent subjects. New 'no-go' proofs show that the rational dispositions of an incoherent subject cannot in general be represented by a single assignment of numerical magnitudes to sentences (whether or not those magnitudes satisfy the probability axioms). Instead, we should attribute to each incoherent subject a whole family of probability functions, indexed to choice conditions. If, in addition, we impose a "local coherence" condition, we can make good on the thought that rationality requires respecting easy logical entailments but not hard ones. The result is an extension of decision theory that applies to incoherent or fragmented subjects, assimilates into decision theory the distinction between knowledge-that and knowledge-how, and applies to cases of "in-between belief".

This is joint work with Agustin Rayo (MIT).

November 14, 2016

Jamie Pietruska, Rutgers University, Department of History

"Old Probabilities" and "Cotton Guesses": Weather Forecasts, Agricultural Statistics, and Uncertainty in the Late-Nineteenth and Early-Twentieth-Century United States

Abstract

This talk, which is drawn from Looking Forward: Prediction and Uncertainty in Modern America (forthcoming, University of Chicago Press), will examine weather forecasting and cotton forecasting as forms of knowledge production that initially sought to conquer unpredictability but ultimately accepted uncertainty in modern economic life. It will focus on contests between government and commercial forecasters over who had the authority to predict the future and the ensuing epistemological debates over the value and meaning of forecasting itself. Intellectual historians and historians of science have conceptualized the late nineteenth century in terms of “the taming of chance” in the shift from positivism to probabilism, but, as this talk will demonstrate, Americans also grappled with predictive uncertainties in daily life during a time when they increasingly came to believe in but also question the predictability of the weather, the harvest, and the future.

November 21, 2016

Glenn Shafer, Rutgers University, Business School

Defensive forecasting

Abstract

In game-theoretic probability, Forecaster gives probabilities (or upper expectations) on each round of the game, and Skeptic tests these probabilities by betting, while Reality decides the outcomes. Can Forecaster pass Skeptic's tests?

As it turns out, Forecaster can defeat any particular strategy for Skeptic, provided only that each move prescribed by the strategy varies continuously with respect to Forecaster's previous move. Forecaster wants to defeat more than a single strategy for Skeptic; he wants to defeat simultaneously all the strategies Skeptic might use. But as we will see, Forecaster can often amalgamate the strategies he needs to defeat by averaging them, and then he can play against the average. This is called defensive forecasting. Defeating the average may be good enough, because when any one of the strategies rejects Forecaster's validity, the average will reject as well, albeit less strongly.
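
A back-of-the-envelope version of the averaging step (a sketch only, under the assumption that Skeptic's capital processes are nonnegative and linear in his strategy; see the working papers listed below for the precise statements):

    % Capital of the equal-weight mixture of Skeptic's strategies S_1, ..., S_n,
    % each started with capital 1:
    \[
      \bar{\mathcal{K}}_t \;=\; \frac{1}{n}\sum_{i=1}^{n} \mathcal{K}^{(i)}_t
      \;\ge\; \frac{1}{n}\,\mathcal{K}^{(j)}_t \qquad \text{for every } j.
    \]

So if some strategy S_j multiplies its capital by K (rejection at level 1/K), the mixture multiplies its capital by at least K/n (rejection at level n/K); by keeping the mixture's capital bounded, Forecaster keeps every individual strategy's capital within a factor of n of that bound.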

This result has implications for the meaning of probability. It reveals that the crucial step in placing an evidential question in a probabilistic framework is its placement in a sequence of questions. Once we have chosen the sequence, good sequential probabilities can be given, and the validation of these probabilities by experience signifies less than commonly thought.

References:
(1) Defensive forecasting, by Vladimir Vovk, Akimichi Takemura, and Glenn Shafer (Working Paper #8 at http://www.probabilityandfinance.com/articles/08.pdf).
(2) Game-theoretic probability and its uses, especially defensive forecasting, by Glenn Shafer (Working Paper #22 at http://www.probabilityandfinance.com/articles/22.pdf).

November 28, 2016

Elie Ayache, Ito 33

Writing the future

Abstract

Derivative valuation theory is based on the formalism of abstract probability theory and random variables. However, when it is made part of the pricing tool that the 'quant' (quantitative analyst) develops and that the option trader uses, it becomes a pricing technology. The latter exceeds the theory and the formalism. Indeed, the contingent payoff (defining the derivative) is no longer the unproblematic random variable that we used to synthesize by dynamic replication, or whose mathematical expectation we used merely to evaluate, but it becomes a contingent claim. By this distinction we mean that the contingent claim crucially becomes traded independently of its underlying asset, and that its price is no longer identified with the result of a valuation. On the contrary, it becomes a market given and will now be used as an input to the pricing models, inverting them (implied volatility and calibration). One must recognize a necessity, not an accident, in this breach of the formal framework, even read in it the definition of the market now including the derivative instrument. Indeed, the trading of derivatives is the primary purpose of their pricing technology, and not a subsidiary usage. The question then poses itself of a possible formalization of this augmented market, or more simply, of the market. To that purpose we introduce the key notion of writing.
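
A minimal Python sketch of the model inversion mentioned above (implied volatility), using the Black-Scholes formula purely as an illustration; the numbers are made up, and the talk is not tied to this particular model (assumes SciPy is available):

    # Recover the volatility at which the model reproduces a quoted market price:
    # the market price is an input and the pricing model is inverted.
    from math import log, sqrt, exp
    from statistics import NormalDist
    from scipy.optimize import brentq

    N = NormalDist().cdf

    def bs_call(S, K, T, r, sigma):
        """Black-Scholes price of a European call option."""
        d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
        d2 = d1 - sigma * sqrt(T)
        return S * N(d1) - K * exp(-r * T) * N(d2)

    def implied_vol(market_price, S, K, T, r):
        """Volatility that makes the model price match the observed market price."""
        return brentq(lambda s: bs_call(S, K, T, r, s) - market_price, 1e-6, 5.0)

    print(round(implied_vol(market_price=10.45, S=100, K=100, T=1.0, r=0.05), 3))  # ~0.2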

December 5, 2016

Ben Levinstein, Rutgers University, Philosophy Department

Higher-order evidence, Accuracy, and Information Loss

Abstract

Higher-order evidence (HOE) is evidence that you're handling information out of accord with epistemic norms. For instance, you may gain evidence that you're possibly drugged and can't think straight. A natural thought is that you respond by lowering your confidence that you got a complex calculation right. If so, HOE has a number of peculiar features. For instance, if you should take it into account, it leads to violations of Good's theorem and of the norm to update by conditionalization. This motivates a number of philosophers to embrace the steadfast position: you shouldn't lower your confidence even though you have evidence you're drugged. I disagree. I argue that HOE is a kind of information loss. This both explains its peculiar features and shows what's wrong with some recent steadfast arguments. Telling agents not to respond is like telling them never to forget anything.

December 12, 2016

Vladimir Vovk, University of London

Treatment of uncertainty in the foundations of probability

Abstract

Kolmogorov's measure-theoretic axioms of probability formalize the Knightian notion of risk. Classical statistics adds a degree of Knightian uncertainty, since there is no probability distribution on the parameters, but uncertainty and risk are clearly separated. Game-theoretic probability formalizes the picture in which both risk and uncertainty interfere at every moment. The fruitfulness of this picture will be demonstrated by open theories in science and the emergence of stochasticity and probability in finance.

January 23, 2017

Shelly Goldstein, Rutgers University, Mathematics Department

Probability in Quantum Mechanics (and Bohmian Mechanics)

Abstract

No abstract.

January 30, 2017

Alexander Stein, Brooklyn Law School

Behavioral Probability

Abstract

Throughout their long history, humans have worked hard to tame chance. They adapted to their uncertain physical and social environments by using the method of trial and error. This evolutionary process made humans reason about uncertain facts the way they do. Behavioral economists argue that humans’ natural selection of their prevalent mode of reasoning wasn’t wise. They censure this mode of reasoning for violating the canons of mathematical probability that a rational person must obey.

Based on the insights from probability theory and the philosophy of induction, I argue that a rational person need not apply mathematical probability in making decisions about individual causes and effects. Instead, she should be free to use common sense reasoning that generally aligns with causative probability. I also show that behavioral experiments uniformly miss their target when they ask reasoners to extract probability from information that combines causal evidence with statistical data. Because it is perfectly rational for a person focusing on a specific event to prefer causal evidence to general statistics, those experiments establish no deviations from rational reasoning. Those experiments are also flawed in that they do not separate the reasoners’ unreflective beliefs from rule-driven acceptances. The behavioral economists’ claim that people are probabilistically challenged consequently remains unproven.

February 6, 2017

Branden Fitelson, Northeastern University, Philosophy Department

Two Approaches to Belief Revision

Abstract

In this paper, we compare and contrast two methods for the qualitative revision of (viz., “full”) beliefs. The first (“Bayesian”) method is generated by a simplistic diachronic Lockean thesis requiring coherence with the agent’s posterior credences after conditionalization. The second (“Logical”) method is the orthodox AGM approach to belief revision. Our primary aim will be to characterize the ways in which these two approaches can disagree with each other -- especially in the special case where the agent’s belief sets are deductively cogent.
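
A toy Python sketch of the first ("Bayesian") route as described above; the threshold, worlds, and numbers are my own illustrative assumptions, not the paper's formal machinery:

    # Diachronic Lockean picture: full belief in a proposition iff its credence clears a
    # threshold, and the post-learning belief set must cohere with the credences obtained
    # by conditionalizing on the evidence.
    THRESHOLD = 0.8

    def conditionalize(credence, evidence):
        """Bayesian conditionalization over a finite set of worlds."""
        total = sum(p for w, p in credence.items() if w in evidence)
        return {w: (p / total if w in evidence else 0.0) for w, p in credence.items()}

    def believed(credence, propositions):
        """Propositions (sets of worlds) whose credence clears the Lockean threshold."""
        return [name for name, prop in propositions.items()
                if sum(credence[w] for w in prop) >= THRESHOLD]

    prior = {"w1": 0.5, "w2": 0.4, "w3": 0.1}
    props = {"A": {"w1", "w2"}, "B": {"w1"}}

    posterior = conditionalize(prior, evidence={"w1", "w3"})
    print(believed(prior, props))      # ['A']
    print(believed(posterior, props))  # ['A', 'B'] -- beliefs revised to cohere with posterior credences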

The latest draft can be downloaded: http://fitelson.org/tatbr.pdf

February 13, 2017

Gretchen Chapman, Rutgers University, Psychology Department

Empirical Experiments on the Gambler's Fallacy

Abstract

The gambler’s fallacy (GF) is a classic judgment bias where, when predicting events from an i.i.d. sequence, decision makers inflate the perceived likelihood of one outcome (e.g. red outcome from a roulette wheel spin) after a run of the opposing outcome (e.g., a streak of black outcomes). This phenomenon suggests that decision makers act as if the sampling is performed without replacement rather than with replacement. A series of empirical experiments support the idea that lay decision makers indeed have this type of underlying mental model. In an online experiment, MTurk participants drew marbles from an urn after receiving instructions that made clear that the marble draws were performed with vs. without replacement. The GF pattern appeared only under the without-replacement instructions. In two in-lab experiments, student participants predicted a series of roulette spins that were either grouped into blocks or ungrouped as one session. The GF pattern was manifest on most trials, but it was eliminated on the first trial of each block in the blocked condition. This bracketing result suggests that the sampling frame is reset when a new block is initiated. Both studies had a number of methodological strengths: they used actual random draws with no deception of participants, and participants made real-outcome bets on their predictions, such that exhibiting the GF was costly to subjects (yet they still showed it). Finally, the GF was operationalized as predicting or betting on an outcome as a function of run length of the opposing outcome, which revealed a nonlinear form of the GF. These results illuminate the nature of the GF and the decision processes underlying it as well as illustrate a method to eliminate this classic judgment bias.
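
A minimal Python simulation (with made-up urn sizes, not the experiments' materials) contrasting the two sampling models behind the gambler's fallacy: with replacement, a run of black does not change the chance of red; without replacement, it does.

    import random

    def p_red_after_black_run(n_red=10, n_black=10, run=3, with_replacement=True, trials=100_000):
        """Estimated probability of drawing red right after a run of black draws."""
        hits = valid = 0
        for _ in range(trials):
            urn = ["red"] * n_red + ["black"] * n_black
            random.shuffle(urn)
            draws = []
            for _ in range(run + 1):
                draws.append(random.choice(urn) if with_replacement else urn.pop())
            if all(d == "black" for d in draws[:run]):   # condition on a run of black
                valid += 1
                hits += draws[run] == "red"
        return hits / valid

    print(p_red_after_black_run(with_replacement=True))   # ~0.5
    print(p_red_after_black_run(with_replacement=False))  # ~0.59: red becomes more likely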

February 20, 2017

Michał Godziszewski, University of Warsaw, Institute of Philosophy

Dutch Books and nonclassical probability spaces

Abstract

We investigate how Dutch Book considerations can be conducted in the context of two classes of nonclassical probability spaces used in philosophy of physics. In particular we show that a recent proposal by B. Feintzeig to find so-called “generalized probability spaces” which would not be susceptible to a Dutch Book and would not possess a classical extension is doomed to fail. Noting that the particular notion of a nonclassical probability space used by Feintzeig is not the most common one employed in philosophy of physics, and that his usage of the “classical” Dutch Book concept is not appropriate in “nonclassical” contexts, we then argue that if we switch to the more frequently used formalism and use the correct notion of a Dutch Book, then none of these probability spaces is susceptible to a Dutch Book. We also settle a hypothesis regarding the existence of classical extensions of a class of generalized probability spaces.

This is a joint work with Leszek Wroński (Jagiellonian University).
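
For readers unfamiliar with the classical notion in play, a toy Python illustration of a Dutch Book against incoherent credences (my own example, unrelated to the generalized spaces discussed in the talk):

    # An agent whose credences in A and not-A sum to more than 1 will pay those credences
    # for a pair of $1 bets whose combined payoff is exactly $1: a guaranteed loss.
    credence_A, credence_not_A = 0.6, 0.6        # incoherent: they sum to 1.2

    price_paid = credence_A + credence_not_A     # fair prices by the agent's own lights
    for a_is_true in (True, False):
        payoff = 1.0                             # exactly one of the two bets pays $1
        print(a_is_true, round(payoff - price_paid, 2))   # -0.2 in both cases: a sure loss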

February 27, 2017

Hans Halvorson, Princeton University, Philosophy Department

Probability Ex Nihilo

Abstract

In many mathematical settings, there is a sense in which we get probability "for free." I’ll consider some ways in which this notion of "for free" can be made precise, and its connection (or lack thereof) to rational credences. As one specific application, I’ll consider the meaning of cosmological probabilities, i.e. probabilities over the space of possible universes.

March 6, 2017

Tamar Lando, Columbia University, Philosophy Department

Runaway Credences and the Principle of Indifference

Abstract

The principle of indifference is a rule for rationally assigning precise degrees of confidence to possibilities among which we have no reason to discriminate. I argue that this principle, in combination with standard Bayesian conditionalization, has untenable consequences. In particular, it allows agents to leverage their ignorance toward a position of very strong confidence vis-à-vis propositions about which they know very little. I study the consequences for our response to puzzles about self-locating belief, where a restricted principle of indifference (together with Bayesian conditionalization) is widely endorsed.

March 20, 2017

Sandy Zabell, Northwestern University, Mathematics Department

Alan Turing and the Applications of Probability to Cryptography

Abstract

In the years before World War II Bayesian statistics went into eclipse, a casualty of the combined attacks of statisticians such as R. A. Fisher and Jerzy Neyman. During the war itself, however, the brilliant but statistically naive Alan Turing developed de novo a Bayesian approach to cryptanalysis, which he then applied to good effect against a number of German encryption systems. The year 2012 was the centenary of the birth of Alan Turing, and as part of the celebrations the British authorities released materials casting light on Turing's Bayesian approach. In this talk I discuss how Turing's Bayesian view of inductive inference was reflected in his approach to cryptanalysis, and give an example where his Bayesian methods proved more effective than the orthodox ones more commonly used. I will conclude by discussing the curious career of I. J. Good, initially one of Turing's assistants at Bletchley Park. Good became one of the most influential advocates for Bayesian statistics after the war, although he hid the reasons for his belief in their efficacy for many decades due to their classified origins.

March 27, 2017

Brad Weslake, New York University-Shanghai, Philosophy Department

Fitness and Variance

Abstract

This paper is about the role of probability in evolutionary theory. I present some models of natural selection in populations with variance in reproductive success. The models have been taken by many to entail that the propensity theory of fitness is false. I argue that the models do not entail that fitness is not a propensity. Instead, I argue that the lesson of the models is that the fitness of a type is not grounded in the fitness of individuals of that type.

April 3, 2017

Peter Achinstein, Johns Hopkins University, Philosophy Department

Epistemic Simplicity: The Last Refuge of a Scoundrel

Abstract

Some of the greatest scientists, including Newton and Einstein, invoke simplicity in defense of a theory they promote. Newton does so in defense of his law of gravity, Einstein in defense of his general theory of relativity. Both claim that nature is simple, and that, because of this, simplicity is an epistemic virtue. I propose to ask what these claims mean and whether, and if so how, they can be supported. The title of the talk should tell you where I am headed.

April 10, 2017

Harry Crane, Rutgers University, Department of Statistics

Probabilities as Shapes

Abstract

In mathematics, statistics, and perhaps even in our intuition, it is conventional to regard probabilities as numbers, but I prefer instead to think of them as shapes. I'll explain how and why I prefer to think of probabilities as shapes instead of numbers, and will discuss how these probability shapes can be formalized in terms of infinity groupoids (or homotopy types) from homotopy type theory (HoTT).

April 17, 2017

Dimitris Tsementzis, Rutgers University, Department of Statistics

Sample Structures

Abstract

I will outline some difficult cases for the classical formalization of a sample space as a *set* of outcomes, and argue that some of these cases are better served by a formalization of a sample space as an appropriate *structure* of outcomes.

April 24, 2017

Miriam Schoenfield, University of Texas, Department of Philosophy

Beliefs Formed Arbitrarily

Abstract

This paper addresses a worry about beliefs formed arbitrarily: for example, religious, political and moral beliefs that we realize we possess because of the social environments we grew up in. The paper motivates a set of criteria for determining when the fact that our beliefs were arbitrarily formed should prompt us to revise them. What matters, I will argue, is how precise or imprecise your probabilities are with respect to the matter in question.

May 1, 2017

Nicholas Teh, Notre Dame University, Philosophy Department

Probability, Inconsistency, and the Quantum

Abstract

Various images of the inconsistency between (the empirical probabilities of) quantum theory and classical probability have been handed down to us by tradition. Of these, two of the most compelling are the "geometric" image of inconsistency implicit in Kochen-Specker arguments, and the "Dutch Book violation" image of inconsistency which is familiar to us from epistemology and the philosophy of rationality. In this talk, I will argue that there is a systematic and highly general relationship between the two images.