Koellner: Gödel’s Disjunction

Gödel’s Disjunction
Peter Koellner (Harvard University)
5:00 pm, Friday, April 7th, 2017
716 Philosophy Hall, Columbia University

Abstract. Gödel’s disjunction asserts that either “the mind cannot be mechanized” or “there are absolutely undecidable statements.” Arguments for and against each disjunct are examined in the context of precise frameworks governing the notions of absolute provability and truth. The focus is on Penrose’s new argument, which interestingly involves type-free truth. In order to reconstruct Penrose’s argument, a system, DKT, is devised for absolute provability and type-free truth. It turns out that in this setting there are actually two versions of the disjunction and its disjuncts. The first, fully general versions end up being (provably) indeterminate. The second, restricted versions end up being (provably) determinate, and so in this case there is at least an initial prospect of success. However, in this case it will be seen that although the disjunction itself is provable, neither disjunct is provable or refutable in the framework.
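For orientation, a common schematic formalization of the disjunction (our notation, not necessarily Koellner’s; K stands for absolute provability, T for truth, and W_e for the e-th computably enumerable set) runs:

```latex
% Either absolute provability is not mechanizable ...
\neg \exists e \; \bigl( \{ \varphi : K\varphi \} = W_e \bigr)
% ... or some (true) statement is absolutely undecidable:
\;\lor\;
\exists \varphi \, \bigl( T\varphi \;\land\; \neg K\varphi \;\land\; \neg K\neg\varphi \bigr)
```

The two versions mentioned in the abstract differ in how unrestrictedly the quantifier over statements is read within the type-free framework.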

Workshop on Probability and Learning

Saturday, April 8th, 2017
716 Philosophy Hall, Columbia University

10:00 am – 11:30 am
Gordon Belot (University of Michigan)
Abstract. This talk comprises three short stories. The overarching themes are: (i) that the notion of typicality is protean; and (ii) that Bayesian technology is both more and less rigid than is sometimes thought.

11:45 am – 1:15 pm
Simon Huttegger (UC Irvine)
Schnorr Randomness and Lévy’s Martingale Convergence Theorem
Abstract. Much recent work in algorithmic randomness concerns characterizations of randomness in terms of the almost-everywhere behavior of suitably effectivized versions of functions from analysis or probability. In this talk, we take a look at Lévy’s Martingale Convergence Theorem from this perspective. Lévy’s theorem is of fundamental importance to Bayesian epistemology. We note that much of Pathak, Rojas, and Simpson’s work on Schnorr randomness and the Lebesgue Differentiation Theorem in the Euclidean context carries over to Lévy’s Martingale Convergence Theorem in the Cantor space context. We discuss the methodological choices one faces in choosing the appropriate mode of effectivization and the potential bearing of these results on Schnorr’s critique of Martin-Löf. We also discuss the consequences of our result for the Bayesian model of learning.
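As background (our gloss, not part of the abstract), the upward form of Lévy’s theorem states that conditional expectations along a filtration converge:

```latex
% Lévy's (upward) martingale convergence theorem:
% X integrable, (F_n) an increasing filtration,
% F_\infty the sigma-algebra generated by the union of the F_n.
\mathbb{E}[X \mid \mathcal{F}_n] \;\longrightarrow\; \mathbb{E}[X \mid \mathcal{F}_\infty]
\qquad \text{almost surely and in } L^1 .
```

The Bayesian reading: as evidence accumulates, the agent’s conditional opinions converge to the opinion she would hold given all the evidence, which is why effectivized versions of the theorem bear on models of learning.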

1:15 pm – 2:45 pm

2:45 pm – 4:15 pm
Deborah Mayo (Virginia Tech)
Probing With Severity: Beyond Bayesian Probabilism and Frequentist Performance
Abstract. Getting beyond today’s most pressing controversies revolving around statistical methods and irreproducible findings requires scrutinizing underlying statistical philosophies. Two main philosophies about the roles of probability in statistical inference are probabilism and performance (in the long-run). The first assumes that we need a method of assigning probabilities to hypotheses; the second assumes that the main function of statistical method is to control long-run performance. I offer a third goal: controlling and evaluating the probativeness of methods. A statistical inference, in this conception, takes the form of inferring hypotheses to the extent that they have been well or severely tested. A report of poorly tested claims must also be part of an adequate inference. I show how the “severe testing” philosophy clarifies and avoids familiar criticisms and abuses of significance tests and cognate methods (e.g., confidence intervals). Severity may be threatened in three main ways: fallacies of rejection and non-rejection, unwarranted links between statistical and substantive claims, and violations of model assumptions. I illustrate with some controversies surrounding the use of significance tests in the discovery of the Higgs particle in high energy physics.

4:30 pm – 6:00 pm
Teddy Seidenfeld (Carnegie Mellon University)
Radically Elementary Imprecise Probability Based on Extensive Measurement
Abstract. This presentation begins with motivation for “precise” non-standard probability. Using two old challenges — involving (i) symmetry of probabilistic relevance and (ii) respect for weak dominance — I contrast the following three approaches to conditional probability given a (non-empty) “null” event and their three associated decision theories.
Approach #1 – Full Conditional Probability Distributions (Dubins, 1975) conjoined with Expected Utility.
Approach #2 – Lexicographic Probability conjoined with Lexicographic Expected Value (e.g., Blume et al., 1991).
Approach #3 – Non-standard Probability and Expected Utility based on Non-Archimedean Extensive Measurement (Narens, 1974).
The second part of the presentation discusses progress we’ve made using Approach #3 within a context of Imprecise Probability.
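For orientation (our gloss, not the speaker’s), the full conditional probabilities of Approach #1 take P(· | B) to be a finitely additive probability for every non-empty event B, tied together by a multiplicative axiom in the style of Dubins:

```latex
% Multiplicative axiom for full conditional probabilities:
% for events A \subseteq B \subseteq C with B non-empty,
P(A \mid C) \;=\; P(A \mid B)\, P(B \mid C).
% In particular, P(. | B) is well defined even when
% B is "null", i.e. P(B | \Omega) = 0.
```

This is what makes conditioning on null events meaningful in Approach #1, and it is against this backdrop that the lexicographic and non-standard alternatives are compared.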

Reception to follow

Suppes Lectures by Easwaran

by Kenny Easwaran (Texas A&M University)

Graduate Workshop
Measuring Beliefs
3:00 – 5:00 pm, Friday, March 31, 2017
716 Philosophy Hall, Columbia University

Departmental Lecture
An Opinionated Introduction to the Foundations of Bayesianism
4:10 – 6:00 pm, Tuesday, April 4, 2017
716 Philosophy Hall, Columbia University
Reception to follow in 720 Philosophy Hall

Public Lecture
Unity in Diversity: “The City as a Collective Agent”
4:10 – 6:00 pm, Thursday, April 6, 2017
603 Hamilton Hall, Columbia University

Parikh: An Epistemic Generalization of Rationalizability

An Epistemic Generalization of Rationalizability
Rohit Parikh (CUNY)
4:10 pm, Friday, March 24th, 2017
Faculty House, Columbia University

Abstract. Rationalizability, originally proposed by Bernheim and Pearce, generalizes the notion of Nash equilibrium. Nash equilibrium requires common knowledge of strategies; rationalizability requires only common knowledge of rationality. However, the original notion assumes that the payoffs are common knowledge: agents know what world they are in, but may be ignorant of what the other agents are playing.

We generalize the original notion of rationalizability to consider situations where agents do not know what world they are in, or where some know but others do not know. Agents who know something about the world can take advantage of their superior knowledge. It may also happen that both Ann and Bob know about the world but Ann does not know that Bob knows. How might they act?

We will show how a notion of rationalizability in the context of partial knowledge, represented by a Kripke structure, can be developed.
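As a toy illustration (our example, not drawn from the abstract), suppose Ann must choose between actions U and D, and her payoff depends on which of two worlds obtains:

```latex
% Ann's payoffs in worlds w_1, w_2:
\begin{array}{c|cc}
        & w_1 & w_2 \\ \hline
  U     &  2  &  0  \\
  D     &  1  &  1
\end{array}
% If Ann knows the world, U is her unique rational choice in w_1
% and D in w_2. If she cannot distinguish the worlds, either action
% is rationalizable under some belief over {w_1, w_2}.
```

Adding a second player, and higher-order uncertainty such as Ann not knowing whether Bob knows the world, is what the Kripke-structure formulation is designed to handle.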

Gaifman and Liu: A Simpler and More Realistic Subjective Decision Theory

Essential Simplifications of Savage’s Subjective Probabilities System
Haim Gaifman (Columbia University) and
Yang Liu (University of Cambridge)
4:10 pm, Friday, November 18th, 2016
Faculty House, Columbia University

Abstract. This talk covers: (I) a short outline of Savage’s system; (II) a new mathematical technique for handling “partitions with errors,” which yields a simplification that Savage sought but did not achieve and leads to the definition of numerically precise probabilities without the σ-algebra assumption; (III) some philosophical analysis of the notion of an idealized rational agent, which is commonly used as a guideline for subjective probabilities.

Some acquaintance with Savage’s system is helpful, but (I) is included to make the presentation self-contained. The talk is based on joint work by the authors titled “A Simpler and More Realistic Subjective Decision Theory.” Please email Robby for the introductory section of the current draft of the paper.

More about the seminar here.

We will be having dinner at Faculty House right after the meeting. Please let Robby know if you will be joining us so that he can make an appropriate reservation. (Please be advised that at this point the university only agrees to cover the expenses of the speaker and the rapporteur; the cost for all others is $30, payable by cash or check.)

Price: Heart of DARCness

Heart of DARCness
Huw Price (University of Cambridge)
4:10 pm, Thursday, October 13th, 2016
Faculty House, Columbia University

Abstract. Alan Hájek has recently criticised the thesis that Deliberation Crowds Out Prediction (renaming it the DARC thesis, for ‘Deliberation Annihilates Reflective Credence’). Hájek’s paper has reinforced my sense that proponents and opponents of this thesis often talk past one another. To avoid confusions of this kind we need to dissect our way to the heart of DARCness, and to distinguish it from various claims for which it is liable to be mistaken. In this talk, based on joint work with Yang Liu, I do some of this anatomical work. Properly understood, I argue, the heart is in good shape, and untouched by Hájek’s jabs at the surrounding tissue. Moreover, a feature that Hájek takes to be a problem for the DARC thesis – that it commits us to widespread ‘credal gaps’ – turns out to be a common and benign feature of a broad class of cases, of which deliberation is easily seen to be one.

