Diaconis: The Problem of Thinking Too Much

UNIVERSITY SEMINAR ON LOGIC, PROBABILITY, AND GAMES
The Problem of Thinking Too Much

Persi Diaconis (Stanford University)
4:10 pm, Friday, September 16, 2016
Faculty House, Columbia University

Abstract. We all know the problem: you sit there, turning things over, and nothing gets done. Indeed, there are examples where "quick and dirty" approaches that throw away information dominate. My examples will be from Bayesian statistics and the mathematics of coin tossing, but I will try to survey some of the work in psychology, philosophy, and economics.


We will be having dinner right after the meeting at the Faculty House. Please let Robby know if you will be joining us so that he can make an appropriate reservation. (Please be advised that at this point the university only agrees to cover the expenses of the speaker and the rapporteur; the cost for all others is $25, payable by cash or check.)

 

Easwaran: A New Framework for Aggregating Utility

UNIVERSITY SEMINAR ON LOGIC, PROBABILITY, AND GAMES
A New Framework for Aggregating Utility
Kenny Easwaran (Texas A&M University)
4:10 pm, Friday, March 11, 2016
Faculty House, Columbia University

Abstract. It is often assumed that a natural way to aggregate utility over multiple agents is by addition. When there are infinitely many agents, this leads to various problems. Vallentyne and Kagan approach this problem by providing a partial ordering over outcomes, rather than a numerical aggregate value. Bostrom and Arntzenius both argue that without a numerical value, it is difficult to integrate this aggregation into our best method for considering acts with risky outcomes: expected value.
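To see why simple addition misbehaves with infinitely many agents, here is a minimal illustration (my own standard example, not taken from the talk): suppose a hypothetical population of agents 0, 1, 2, … whose utilities alternate between +1 and −1. The partial sums never settle on a total, and merely regrouping the same agents suggests different aggregate values.

```python
# Hedged illustration (hypothetical utilities, not Easwaran's framework):
# naive addition of utilities over infinitely many agents is order- and
# grouping-dependent, so there is no well-defined "total utility".

def utility(agent: int) -> float:
    """Hypothetical assignment: even-numbered agents get +1, odd-numbered get -1."""
    return 1.0 if agent % 2 == 0 else -1.0

# Partial sums oscillate between 1 and 0 forever; they never converge.
partial_sums, total = [], 0.0
for agent in range(10):
    total += utility(agent)
    partial_sums.append(total)
print(partial_sums)  # [1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0]

# Regrouping the very same agents suggests two different "totals":
#   (u0 + u1) + (u2 + u3) + ... = 0 + 0 + ... = 0
#   u0 + (u1 + u2) + (u3 + u4) + ... = 1 + 0 + 0 + ... = 1
grouped_from_start = sum(utility(2 * k) + utility(2 * k + 1) for k in range(1000))
grouped_after_first = utility(0) + sum(utility(2 * k + 1) + utility(2 * k + 2) for k in range(1000))
print(grouped_from_start, grouped_after_first)  # 0.0 1.0
```

This sort of order and grouping dependence is one of the standard problems that motivates retreating from a single numerical sum to a partial ordering over outcomes.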

My 2014 paper, “Decision Theory without Representation Theorems”, describes a project for evaluating risky acts that extends expected value to cases where it is infinite or undefined. The project of this paper is to extend this methodology in a way that deals with risk and aggregation across agents simultaneously, instead of giving priority to one or the other as Bostrom and Arntzenius require. The result is still merely a partial ordering, but since it already includes all considerations of risk and aggregation, there is no further need for particular numerical representations.

Floyd: Gödel on Russell

UNIVERSITY SEMINAR ON LOGIC, PROBABILITY, AND GAMES
Gödel on Russell: Truth, Perception, and an Infinitary Version of the Multiple Relation Theory of Judgment
Juliet Floyd (Boston University)
4:10 pm, May 8, 2014
Faculty House, Columbia University

Hartmann: Learning Conditionals and the Problem of Old Evidence

UNIVERSITY SEMINAR ON LOGIC, PROBABILITY, AND GAMES
Learning Conditionals and the Problem of Old Evidence
Stephan Hartmann (Ludwig Maximilians-Universität München)
4:10 pm, February 13, 2015
Faculty House, Columbia University

Abstract. The following are abstracts of two papers on which this talk is based.

The Problem of Old Evidence has troubled Bayesians ever since Clark Glymour first presented it in 1980. Several solutions have been proposed, but all of them have drawbacks and none of them is considered to be the definitive solution. In this article, I propose a new solution which combines several old ideas with a new one. It circumvents the crucial omniscience problem in an elegant way and leads to a considerable confirmation of the hypothesis in question.
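For readers unfamiliar with the problem, the core difficulty can be reconstructed in one line (my gloss, not part of the original abstract): if the evidence E is already known, the agent assigns it probability one, and conditionalizing on it cannot raise the probability of any hypothesis H.

```latex
% If P(E) = 1 (and P(H) > 0), then P(E | H) = 1 as well, so by Bayes' theorem
% conditionalizing on E leaves every hypothesis exactly where it was:
\[
  P(E) = 1 \;\Longrightarrow\; P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E)} \;=\; P(H).
\]
% Hence "old" evidence confirms nothing on the standard Bayesian account, even
% though, e.g., the perihelion data intuitively did confirm general relativity.
```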

Modeling how to learn an indicative conditional has been a major challenge for formal epistemologists. One proposal to meet this challenge is to construct the posterior probability distribution by minimizing the Kullback-Leibler divergence between the posterior and the prior, taking the learned information (expressed as a conditional probability statement) into account as a constraint. This proposal has been criticized in the literature on the basis of several clever examples. In this article, we revisit four of these examples and show that one obtains intuitively correct results for the posterior probability distribution if the underlying probabilistic models reflect the causal structure of the scenarios in question.
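The following sketch shows the mechanics of the proposal on an illustrative toy case of my own (the prior, the constraint value, and the atom ordering are assumptions, not Hartmann's models): the conditional "If A then B" is learned as the constraint P(B | A) = 0.9, and the posterior is chosen to minimize the Kullback-Leibler divergence from the prior subject to that constraint.

```python
# Hedged toy example of KL-minimization under a conditional-probability constraint.
import numpy as np
from scipy.optimize import minimize

# Atoms of the joint distribution over A and B, in the order:
# (A & B), (A & not-B), (not-A & B), (not-A & not-B)
prior = np.array([0.25, 0.25, 0.25, 0.25])  # illustrative uniform prior
target = 0.9                                # learned constraint: P(B | A) = 0.9

def kl(q):
    """Kullback-Leibler divergence D(q || prior) for strictly positive q."""
    return float(np.sum(q * np.log(q / prior)))

constraints = [
    {"type": "eq", "fun": lambda q: q.sum() - 1.0},                  # q is a distribution
    {"type": "eq", "fun": lambda q: q[0] / (q[0] + q[1]) - target},  # q(B | A) = target
]
result = minimize(kl, x0=prior, method="SLSQP",
                  bounds=[(1e-9, 1.0)] * 4, constraints=constraints)
posterior = result.x
print("posterior:", np.round(posterior, 3))
print("P(B | A) =", round(posterior[0] / (posterior[0] + posterior[1]), 3))
```

The debate the abstract alludes to concerns cases where this mechanical recipe, applied to a model that ignores causal structure, yields counterintuitive posteriors; the talk's claim is that encoding the causal structure of the scenario restores the intuitive verdicts.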

Wheeler: The Rise and Fall of Accuracy-first Epistemology

UNIVERSITY SEMINAR ON LOGIC, PROBABILITY, AND GAMES
The Rise and Fall of Accuracy-first Epistemology
Gregory Wheeler (Ludwig Maximilian University of Munich)
4:10 pm, October 31, 2014
Room 2, Faculty House, Columbia University

Abstract. Accuracy-first epistemology aims to supply non-pragmatic justifications for a variety of epistemic norms. The contemporary basis for accuracy-first epistemology is Jim Joyce’s program to reinterpret de Finetti’s scoring-rule arguments in terms of a “purely epistemic” notion of “gradational accuracy.” On Joyce’s account, scoring rules are taken to measure the accuracy of an agent’s belief state with respect to the true state of the world, where accuracy is conceived to be a pure epistemic good. Joyce’s non-pragmatic vindication of probabilism, then, is an argument to the effect that a measure of gradational accuracy satisfies conditions that are close enough to those necessary to run a de Finetti style coherence argument. A number of philosophers, including Hannes Leitgeb and Richard Pettigrew, have embraced Joyce’s program whole hog. Leitgeb and Pettigrew, for instance, have argued that Joyce’s program is too lax, and they have proposed conditions that narrow down the class of admissible gradational accuracy functions, while Pettigrew and his collaborators have sought to extend the list of epistemic norms receiving an accuracy-first treatment, a program that he calls Epistemic Decision Theory.
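As a deliberately tiny illustration of the kind of scoring-rule argument at issue (my own stock example, not taken from the talk): under the Brier score, probabilistically incoherent credences over {A, not-A} are strictly less accurate than some coherent credences in every possible world.

```python
# Hedged sketch of an accuracy-dominance argument with the Brier score.
def brier(credences, world):
    """Sum of squared distances between credences and the truth values in `world`."""
    return sum((credences[p] - world[p]) ** 2 for p in credences)

incoherent = {"A": 0.6, "not-A": 0.6}  # credences sum to 1.2: probabilistically incoherent
coherent = {"A": 0.5, "not-A": 0.5}    # a coherent competitor

worlds = [{"A": 1.0, "not-A": 0.0}, {"A": 0.0, "not-A": 1.0}]
for world in worlds:
    print(world, brier(incoherent, world), brier(coherent, world))
# In both worlds the coherent credences score 0.5 and the incoherent ones 0.52,
# so the incoherent credences are accuracy-dominated: less accurate come what may.
```

The dispute the talk takes up is not whether such dominance results hold, but whether the axioms needed to treat accuracy as a numerical epistemic utility can be justified without smuggling in pragmatic considerations.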

In this talk I report on joint work with Conor Mayo-Wilson that challenges the core doctrine of Epistemic Decision Theory, namely the proposal to supply a purely non-pragmatic justification for anything resembling the von Neumann and Morgenstern axioms for a numerical epistemic utility function. Indeed, we argue that none of the axioms necessary for Epistemic Decision Theory has a satisfactory non-pragmatic justification, and we point to reasons to suspect that not all of the axioms could be given a satisfactory non-pragmatic justification. Our argument, if sound, also has consequences for recent discussions of “pragmatic encroachment”. For if pragmatic encroachment is a debate about whether there is a pragmatic component to the justification condition of knowledge, our arguments may be viewed as addressing the true belief condition of (fallibilist) accounts of knowledge.

Leitgeb: The Humean Thesis on Belief

UNIVERSITY SEMINAR ON LOGIC, PROBABILITY, AND GAMES
The Humean Thesis on Belief
Hannes Leitgeb (Ludwig Maximilian University of Munich)
4:15 pm, May 2, 2014
716 Philosophy Hall, Columbia University

Abstract. I am going to make precise, and assess, the following thesis on (all-or-nothing) belief and degrees of belief: it is rational to believe a proposition just in case it is rational to have a stably high degree of belief in it. I will start with some historical remarks, which are going to motivate calling this postulate the “Humean thesis on belief”. Once the thesis has been formulated in formal terms, it is possible to derive conclusions from it. I will highlight three of its consequences in particular: doxastic logic; an instance of what is sometimes called the Lockean thesis on belief; and a simple qualitative decision theory.
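A rough formal gloss of the thesis (my reconstruction of the intended reading, not verbatim from the talk): all-or-nothing belief Bel is tied to the credence function P by requiring that the probability of a believed proposition stay high under conditionalization on any live possibility consistent with what is believed.

```latex
% Hedged reconstruction: "stably high" read as P-stability above a threshold r >= 1/2.
%   Bel(X)  iff  P(X | Y) > r  for every Y with P(Y) > 0 that is consistent
%                with what the agent believes.
\[
  \mathrm{Bel}(X) \;\Longleftrightarrow\;
  \forall Y\,\bigl(P(Y) > 0 \ \wedge\ Y \text{ consistent with } \mathrm{Bel}
  \ \rightarrow\ P(X \mid Y) > r\bigr), \qquad r \ge \tfrac{1}{2}.
\]
```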