Logic, Probability, and Games

The seminar is concerned with applying formal methods to fundamental issues, with an emphasis on probabilistic reasoning, decision theory, and games. In this context, “logic” is broadly interpreted as covering applications that involve formal representations. The topics of interest have been researched within a very broad spectrum of disciplines, including philosophy, statistics, economics, and computer science. The seminar is intended to bring together scholars from different fields of research so as to illuminate problems of common interest from different perspectives. Throughout each academic year, our monthly meetings feature presentations by members of the seminar and by distinguished guest speakers. In the spring of 2014, the seminar became an integral part of the University Seminars at Columbia University.

Past speakers: Arif Ahmed, Kenny Easwaran, Persi Diaconis, Juliet Floyd, Branden Fitelson, Haim Gaifman, Stephan Hartmann, Daniel Kahneman, Edi Karni, Hannes Leitgeb, Christian List, Bud Mishra, Eric Pacuit, Rohit Parikh, Huw Price, Teddy Seidenfeld, Gregory Wheeler.

Archive: 2015 – 2016 | 2014 – 2015 | 2013 – 2014

2016 – 2017 Meetings


Co-Chairs:
Haim Gaifman (Columbia)
Rohit Parikh (CUNY)
Yang Liu (Cambridge)

Rapporteur:
Robby Finley (Columbia)

 ***

March, 2017

An Epistemic Generalization of Rationalizability
Rohit Parikh (CUNY)
4:10 pm, Friday, March 24th, 2017
Faculty House, Columbia University

Abstract. Rationalizability, originally proposed by Bernheim and Pearce, generalizes the notion of Nash equilibrium. Nash equilibrium requires common knowledge of strategies; rationalizability requires only common knowledge of rationality. However, the original notion assumes that the payoffs are common knowledge: agents know what world they are in, but may be ignorant of what other agents are playing.

We generalize the original notion of rationalizability to consider situations where agents do not know what world they are in, or where some know but others do not know. Agents who know something about the world can take advantage of their superior knowledge. It may also happen that both Ann and Bob know about the world but Ann does not know that Bob knows. How might they act?

We will show how a notion of rationalizability in the context of partial knowledge, represented by a Kripke structure, can be developed.
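
The classical baseline (the common-knowledge-of-payoffs case) can be computed by iterated elimination of strategies that are never best responses. Below is a minimal sketch in Python, my illustration rather than the talk's construction; for simplicity it uses point beliefs about the opponent, whereas the full notion allows probabilistic beliefs. The game and strategy names are made up.

```python
def rationalizable(u1, u2, S1, S2):
    """Iterated elimination of never-best-responses (pure strategies,
    point beliefs). u1, u2: payoff dicts keyed by (s1, s2)."""
    S1, S2 = set(S1), set(S2)
    while True:
        # keep a strategy iff it best-responds to some surviving opponent strategy
        R1 = {s1 for s2 in S2 for s1 in S1
              if u1[(s1, s2)] == max(u1[(t, s2)] for t in S1)}
        R2 = {s2 for s1 in S1 for s2 in S2
              if u2[(s1, s2)] == max(u2[(s1, t)] for t in S2)}
        if (R1, R2) == (S1, S2):
            return S1, S2
        S1, S2 = R1, R2

# B is a best response only to a 50/50 mixture over L and R, so with point
# beliefs it is eliminated; everything else survives.
u1 = {('T','L'): 3, ('T','R'): 0, ('M','L'): 0, ('M','R'): 3,
      ('B','L'): 1, ('B','R'): 1}
u2 = {('T','L'): 1, ('T','R'): 0, ('M','L'): 0, ('M','R'): 1,
      ('B','L'): 0, ('B','R'): 1}
print(rationalizable(u1, u2, {'T','M','B'}, {'L','R'}))  # ({'T','M'}, {'L','R'})
```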

November, 2016

Essential Simplifications of Savage’s Subjective Probabilities System
Haim Gaifman (Columbia University) and Yang Liu (University of Cambridge)
4:10 pm, Friday, November 18th, 2016
Faculty House, Columbia University

Abstract. I shall try to cover: (I) a short outline of Savage’s system; (II) a new mathematical technique for handling “partitions with errors”, which leads to a simplification that Savage tried but did not succeed in getting; and (III) some philosophical analysis of an idealized rational agent, which is commonly used as a guideline for subjective probabilities.

Some acquaintance with Savage’s system is helpful, but I have added (I) in order to make for a self-contained presentation.
The talk is based on joint work with Yang Liu. Please email Robby for an introductory section of the present draft of our paper.

October, 2016

Heart of DARCness
Huw Price (University of Cambridge)
4:10 pm, Thursday, October 13th, 2016
Faculty House, Columbia University

Abstract. Alan Hajek has recently criticised the thesis that Deliberation Crowds Out Prediction (renaming it the DARC thesis, for ‘Deliberation Annihilates Reflective Credence’). Hajek’s paper has reinforced my sense that proponents and opponents of this thesis often talk past one another. To avoid confusions of this kind we need to dissect our way to the heart of DARCness, and to distinguish it from various claims for which it is liable to be mistaken. In this talk, based on joint work with Yang Liu, I do some of this anatomical work. Properly understood, I argue, the heart is in good shape, and untouched by Hajek’s jabs at surrounding tissue. Moreover, a feature that Hajek takes to be a problem for the DARC thesis – that it commits us to widespread ‘credal gaps’ – turns out to be a common and benign feature of a broad class of cases, of which deliberation is easily seen to be one.

September, 2016

The Problem of Thinking Too Much
Persi Diaconis (Stanford University)
4:10 pm, Friday, September 16, 2016
Faculty House, Columbia University

Abstract. We all know the problem: you sit there, turning things over, and nothing gets done. Indeed, there are examples where “quick and dirty” approaches that throw away information dominate. My examples will be from Bayesian statistics and the mathematics of coin tossing, but I will try to survey some of the work in psychology, philosophy, and economics.

2015 – 2016 Meetings


Co-Chairs:
Haim Gaifman (Columbia)
Rohit Parikh (CUNY)
Yang Liu (Cambridge)

Rapporteur:
Robby Finley (Columbia)

 ***

May, 2016

Reason-based choice and context-dependence: an explanatory framework
Christian List (London School of Economics)
4:10 pm, Friday, May 6th, 2016
Faculty House, Columbia University

Abstract. We introduce a “reason-based” framework for explaining and predicting individual choices. The key idea is that a decision-maker focuses on some but not all properties of the options and chooses an option whose “motivationally salient” properties he/she most prefers. Reason-based explanations can capture two kinds of context-dependent choice: (i) the motivationally salient properties may vary across choice contexts, and (ii) they may include “context-related” properties, not just “intrinsic” properties of the options. Our framework allows us to explain boundedly rational and sophisticated choice behaviour. Since properties can be recombined in new ways, it also offers resources for predicting choices in unobserved contexts.
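
As a rough illustration of the framework's moving parts (a toy rendering of mine, not the authors' formal model): options carry property sets, a context contributes context-related properties, and the chosen options are those whose motivationally salient bundles are preference-maximal. All names and the politeness example are invented.

```python
def choose(options, context, salient, prefers):
    """Return the options whose salient property bundles are maximal.

    options: dict option -> set of properties
    context: set of context-related properties shared by all options here
    salient: set of motivationally salient properties for this context
    prefers: prefers(a, b) is True iff bundle a is strictly preferred to b
    """
    bundle = {o: frozenset((props | context) & salient)
              for o, props in options.items()}
    return [o for o in options
            if not any(prefers(bundle[p], bundle[o]) for p in options)]

# A polite agent never takes the larger of two apples. 'largest' is
# context-related (it depends on the menu); here it is pre-computed.
options = {'big apple': {'apple', 'largest'},
           'small apple': {'apple'}}
context = {'in company'}                       # eating with others
salient = {'apple', 'largest', 'in company'}   # politeness makes size salient
polite = lambda a, b: 'largest' in b and 'largest' not in a
print(choose(options, context, salient, polite))   # ['small apple']
```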

March, 2016

A New Framework for Aggregating Utility
Kenny Easwaran (Texas A&M University)
4:10 pm, Friday, March 11, 2016
Faculty House, Columbia University

Abstract. It is often assumed that a natural way to aggregate utility over multiple agents is by addition. When there are infinitely many agents, this leads to various problems. Vallentyne and Kagan approach this problem by providing a partial ordering over outcomes, rather than a numerical aggregate value. Bostrom and Arntzenius both argue that without a numerical value, it is difficult to integrate this aggregation into our best method for considering acts with risky outcomes: expected value.

My 2014 paper, “Decision Theory without Representation Theorems”, describes a project for evaluating risky acts that extends expected value to cases where it is infinite or undefined. The project of this paper is to extend this methodology in a way that deals with risk and aggregation across agents simultaneously, instead of giving priority to one or the other as Bostrom and Arntzenius require. The result is still merely a partial ordering, but since it already includes all considerations of risk and aggregation, there is no further need for particular numerical representations.
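
For a feel of the Vallentyne–Kagan-style partial ordering the paper starts from, here is a finite-horizon sketch (my rendering, not Easwaran's construction): world A is provisionally ranked above world B if A's cumulative utility strictly dominates B's from some point onward. A finite check can suggest, but never prove, the corresponding infinite claim; the utility streams below are made up.

```python
from itertools import accumulate

def eventually_dominates(a, b):
    """True if the partial sums of `a` strictly exceed those of `b` from
    some index onward, within the sampled horizon (a heuristic check)."""
    ca, cb = list(accumulate(a)), list(accumulate(b))
    failures = [i for i, (x, y) in enumerate(zip(ca, cb)) if x <= y]
    return not failures or failures[-1] < len(ca) - 1

# Each agent in A gets 2, each in B gets 1, except B's first agent gets 5:
# A's cumulative utility overtakes B's after the fourth agent.
horizon = 1000
A = [2] * horizon
B = [5] + [1] * (horizon - 1)
print(eventually_dominates(A, B))   # True
```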

December, 2015

Two Approaches to Belief Revision
Branden Fitelson (Rutgers University)
4:10 pm, Friday, December 18, 2015
Faculty House, Columbia University

Abstract. In this paper, we compare and contrast two methods for revising qualitative (viz., “full”) beliefs. The first method is a (broadly) Bayesian one, which operates (in its most naive form) via conditionalization and the minimization of expected inaccuracy. The second method is the AGM approach to belief revision. Our aim here is to provide the most straightforward explanation of the ways in which these two methods agree and disagree with each other. Ultimately, we conclude that AGM may be seen as more epistemically risk-seeking (in a sense to be made precise in the talk) than EUT (from the Bayesian perspective).

This talk is based on joint work with Ted Shear.
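
A minimal sketch of the Bayesian side of the comparison, under assumptions of mine rather than the authors': full beliefs are read off credences by minimizing expected inaccuracy, which, for a reward R for true beliefs and a penalty W for false ones, yields a Lockean threshold of W / (R + W). The weights and worlds below are illustrative.

```python
def believe(credence, R=1.0, W=2.0):
    """Believing p beats suspending iff credence * R > (1 - credence) * W,
    i.e. iff credence exceeds W / (R + W) (here 2/3)."""
    return credence > W / (R + W)

def conditionalize(prior, evidence):
    """Naive Bayesian revision step: conditionalize the credences, then
    read off full beliefs with the threshold again.
    prior: dict world -> probability; evidence: set of worlds."""
    z = sum(p for w, p in prior.items() if w in evidence)
    return {w: (p / z if w in evidence else 0.0) for w, p in prior.items()}

prior = {'w1': 0.5, 'w2': 0.3, 'w3': 0.2}
post = conditionalize(prior, {'w1', 'w2'})
print(believe(post['w1']))   # P(w1) = 0.625 < 2/3: still not believed
```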

November, 2015

Creolizing the Web
Bud Mishra (Courant Institute, NYU)
4:10 pm, Friday, November 20, 2015
Faculty House, Columbia University

Abstract. This talk will focus on a set of game theoretic ideas with applications to Computer, Biological and Social Sciences. We will primarily rely on a realistic formulation of classical information-asymmetric signaling games, in a repeated form, while allowing the agents to dynamically vary their utility functions. We will also explore the design and creolization of a new natural language system (“InTuit”) specifically designed for the web.

The talk will build on our earlier experience in the areas of systems biology (evolutionary models), game theory, data science, model checking, causality analysis, cyber security, insider threat, virtualization and data markets.
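
For readers unfamiliar with repeated signaling games, here is a generic textbook illustration (not the speaker's model): a two-state Lewis signaling game played repeatedly, with sender and receiver updating by simple Roth–Erev-style reinforcement. All parameters are illustrative.

```python
import random

def play(rounds=5000, states=2, signals=2, acts=2):
    # weights over (state, signal) for the sender, (signal, act) for the receiver
    snd = {(st, sg): 1.0 for st in range(states) for sg in range(signals)}
    rcv = {(sg, a): 1.0 for sg in range(signals) for a in range(acts)}

    def draw(table, key):
        opts = [k for k in table if k[0] == key]
        return random.choices(opts, [table[k] for k in opts])[0][1]

    for _ in range(rounds):
        state = random.randrange(states)
        signal = draw(snd, state)
        act = draw(rcv, signal)
        if act == state:              # success reinforces both choices
            snd[(state, signal)] += 1.0
            rcv[(signal, act)] += 1.0
    return snd, rcv

# After enough rounds the weights typically concentrate on a "signaling
# system": each state gets its own signal and the receiver decodes it.
```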

September, 2015

Awareness of Unawareness: A Theory of Decision Making in the Face of Ignorance
Edi Karni (Johns Hopkins University)
4:10 pm, Friday, September 25, 2015
Faculty House, Columbia University

Abstract. In the wake of growing awareness, decision makers anticipate that they might acquire knowledge that, in their current state of ignorance, is unimaginable. Supposedly, this anticipation manifests itself in the decision makers’ choice behavior. In this paper we model the anticipation of growing awareness, lay choice-based axiomatic foundations for a subjective expected utility representation of beliefs about the likelihood of discovering unknown consequences, and assign utility to consequences that are not only unimaginable but may also be nonexistent. In so doing, we maintain the flavor of the reverse Bayesianism of Karni and Vierø (2013, 2015).

2014 – 2015 Meetings


Co-Chairs:
Haim Gaifman (Columbia)
Rohit Parikh (CUNY)

Rapporteur:
Yang Liu (Columbia)

 ***

May, 2015

Gödel on Russell: Truth, Perception, and an Infinitary Version of the Multiple Relation Theory of Judgment
Juliet Floyd (Boston University)
4:10 pm, May 8, 2015
Faculty House, Columbia University

February, 2015

Learning Conditionals and the Problem of Old Evidence
Stephan Hartmann (Ludwig Maximilian University of Munich)
4:10 pm, February 13, 2015
Faculty House, Columbia University

Abstract. The following are abstracts of two papers on which this talk is based.

The Problem of Old Evidence has troubled Bayesians ever since Clark Glymour first presented it in 1980. Several solutions have been proposed, but all of them have drawbacks and none is considered to be the definitive solution. In this article, I propose a new solution which combines several old ideas with a new one. It circumvents the crucial omniscience problem in an elegant way and leads to a considerable confirmation of the hypothesis in question.

Modeling how to learn an indicative conditional has been a major challenge for formal epistemologists. One proposal to meet this challenge is to construct the posterior probability distribution by minimizing the Kullback-Leibler divergence between the posterior and the prior, taking the learned information, expressed as a conditional probability statement, into account as a constraint. This proposal has been criticized in the literature on the basis of several clever examples. In this article, we revisit four of these examples and show that one obtains intuitively correct results for the posterior probability distribution if the underlying probabilistic models reflect the causal structure of the scenarios in question.
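
The minimization described in the second abstract is easy to make concrete numerically. The sketch below (illustrative numbers, hypothetical variable names) finds the distribution closest to a prior P in Kullback-Leibler divergence subject to the learned constraint Q(B|A) = q.

```python
import numpy as np
from scipy.optimize import minimize

P = np.array([0.3, 0.3, 0.2, 0.2])   # worlds: AB, A~B, ~AB, ~A~B (made up)
q = 0.9                               # learned: Q(B|A) = 0.9

kl = lambda Q: float(np.sum(Q * np.log(Q / P)))   # D(Q || P)
constraints = [
    {'type': 'eq', 'fun': lambda Q: Q.sum() - 1.0},            # normalization
    {'type': 'eq', 'fun': lambda Q: Q[0] - q * (Q[0] + Q[1])}  # Q(B|A) = q
]
res = minimize(kl, x0=P, method='SLSQP',
               bounds=[(1e-9, 1.0)] * 4, constraints=constraints)
print(res.x)   # posterior; the ~A worlds keep their prior ratio (here 0.2 : 0.2)
```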

December, 2014

Two lessons to remember from the Sleeping Beauty problem
Teddy Seidenfeld (Carnegie Mellon University)
4:10 pm, December 5, 2014
Faculty House, Columbia University

November, 2014

Conversation about Human Judgment and Decision-making
Daniel Kahneman (Princeton University)
4:10 pm, November 7, 2014
Faculty House, Columbia University

October, 2014

The Rise and Fall of Accuracy-first Epistemology
Gregory Wheeler (Ludwig Maximilian University of Munich)
4:10 pm, October 31, 2014
Faculty House, Columbia University

Abstract.  Accuracy-first epistemology aims to supply non-pragmatic justifications for a variety of epistemic norms. The contemporary basis for accuracy-first epistemology is Jim Joyce’s program to reinterpret de Finetti’s scoring-rule arguments in terms of a “purely epistemic” notion of “gradational accuracy.” On Joyce’s account, scoring rules are taken to measure the accuracy of an agent’s belief state with respect to the true state of the world, where accuracy is conceived to be a pure epistemic good. Joyce’s non-pragmatic vindication of probabilism, then, is an argument to the effect that a measure of gradational accuracy satisfies conditions that are close enough to those necessary to run a de Finetti style coherence argument. A number of philosophers, including Hannes Leitgeb and Richard Pettigrew, have embraced Joyce’s program whole hog. Leitgeb and Pettigrew, for instance, have argued that Joyce’s program is too lax, and they have proposed conditions that narrow down the class of admissible gradational accuracy functions, while Pettigrew and his collaborators have sought to extend the list of epistemic norms receiving an accuracy-first treatment, a program that he calls Epistemic Decision Theory.

In this talk I report on joint work with Conor Mayo-Wilson that challenges the core doctrine of Epistemic Decision Theory, namely the proposal to supply a purely non-pragmatic justification for anything resembling the von Neumann and Morgenstern axioms for a numerical epistemic utility function. Indeed, we argue that none of the axioms necessary for Epistemic Decision Theory has a satisfactory non-pragmatic justification, and we point to reasons to suspect that not all of the axioms could be given one. Our argument, if sound, has consequences for recent discussions of “pragmatic encroachment”, too. For if pragmatic encroachment is a debate about whether there is a pragmatic component to the justification condition of knowledge, our arguments may be viewed as addressing the true belief condition of (fallibilist) accounts of knowledge.
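
The de Finetti-style dominance argument behind Joyce's program can be exhibited numerically. The toy check below (my illustration, with made-up numbers) shows an incoherent credence over p and not-p being Brier-dominated in every world by its closest coherent correction.

```python
import numpy as np

def brier(credence, world):
    """Squared distance from the credence vector to the truth-value vector."""
    return float(np.sum((credence - world) ** 2))

worlds = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]   # (p, not-p)
c_bad = np.array([0.6, 0.7])                  # incoherent: sums to 1.3
c_fix = c_bad - (c_bad.sum() - 1) / 2         # Euclidean projection onto the
                                              # coherent line x + y = 1
for w in worlds:
    assert brier(c_fix, w) < brier(c_bad, w)  # dominated in every world
print(c_fix)   # [0.45 0.55]: strictly more accurate however things turn out
```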

2013 – 2014 Meetings


Co-Chairs:
Haim Gaifman (Columbia)
Rohit Parikh (CUNY)

Rapporteur:
Yang Liu (Columbia)

 ***

May, 2014

The Humean Thesis on Belief
Hannes Leitgeb (Ludwig Maximilian University of Munich)
4:15 – 6:15 PM, May 2nd, 2014
716 Philosophy Hall, Columbia University

Abstract. I am going to make precise, and assess, the following thesis on (all-or-nothing) belief and degrees of belief: It is rational to believe a proposition just in case it is rational to have a stably high degree of belief in it. I will start with some historical remarks, which are going to motivate calling this postulate the “Humean thesis on belief”. Once the thesis has been formulated in formal terms, it is possible to derive conclusions from it. Three of its consequences I will highlight in particular: doxastic logic; an instance of what is sometimes called the Lockean thesis on belief; and a simple qualitative decision theory.
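
The “stably high” clause has a precise rendering in Leitgeb's work via P-stability: X is P-stable just in case P(X|Y) > 1/2 for every positive-probability Y consistent with X. Below is a brute-force checker over a toy three-world space; the numbers are made up.

```python
from itertools import chain, combinations

def p_stable(X, P, r=0.5):
    """X: set of worlds; P: dict world -> probability.
    Checks P(X | Y) > r for every Y with P(Y) > 0 that overlaps X."""
    worlds = list(P)
    all_Y = chain.from_iterable(combinations(worlds, n)
                                for n in range(1, len(worlds) + 1))
    for Y in map(set, all_Y):
        pY = sum(P[w] for w in Y)
        if pY > 0 and Y & X and sum(P[w] for w in Y & X) / pY <= r:
            return False
    return True

P = {'w1': 0.5, 'w2': 0.3, 'w3': 0.2}
print(p_stable({'w1', 'w2'}, P))   # True: P(X|Y) > 1/2 for every such Y
print(p_stable({'w2'}, P))         # False: e.g. P({w2} | {w1, w2}) = 0.375
```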

April, 2014

Causal Decision Theory and intrapersonal Nash equilibria
Arif Ahmed (University of Cambridge)
4:15 – 6:15 PM, April 4th, 2014
716 Philosophy Hall, Columbia University

Abstract. Most philosophers today prefer ‘Causal Decision Theory’ to Bayesian or other non-Causal Decision Theories. What explains this is the fact that in certain Newcomb-like cases, only Causal theories recommend an option on which you would have done better, whatever the state of the world had been. But if so, there are cases of sequential choice in which the same difficulty arises for Causal Decision Theory itself. Worse, under further mild assumptions the Causal Theory faces a money pump in these cases.

It may be illuminating to consider rational sequential choice as an intrapersonal game between one’s stages, and if time permits I will do this. In that light the difficulty for Causal Decision Theory appears to be that it allows, but its non-causal rivals do not allow, for Nash equilibria in such games that are Pareto inefficient.

November, 2013

Dynamic Logics of Evidence Based Beliefs
Eric Pacuit (University of Maryland)
4:15 – 6:15 PM, Friday, November 1, 2013
Room 4419, CUNY GC

Abstract. The intuitive notion of evidence has both semantic and syntactic features. In this talk, I introduce and motivate an evidence logic for agents faced with possibly contradictory evidence from different sources. The logic is based on a neighborhood semantics, where a neighborhood N indicates that the agent has reason to believe that the true state of the world lies in N. Notions of relative plausibility between worlds, and of beliefs based on that ordering, are then defined in terms of this evidence structure. The semantics invites a number of natural special cases, depending on how uniform we make the evidence sets and how coherent their total structure is. I will give an overview of the main axiomatizations for different classes of models and discuss logics that describe the dynamics of changing evidence, and the resulting language extensions. I will also discuss some intriguing connections with logics of belief revision.
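
As a sketch of how plausibility can be derived from evidence, assuming one natural ordering among the several options the literature considers: world w counts as at least as plausible as v when w lies in every evidence set that v lies in. The worlds and evidence sets below are invented.

```python
def at_least_as_plausible(w, v, evidence):
    """evidence: list of sets of worlds (the neighborhoods N)."""
    return all(w in E for E in evidence if v in E)

evidence = [{'a', 'b'}, {'b', 'c'}, {'b'}]   # possibly conflicting sources
print(at_least_as_plausible('b', 'a', evidence))   # True: 'b' is in every set 'a' is in
print(at_least_as_plausible('a', 'b', evidence))   # False: 'a' misses {'b', 'c'}
```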

October, 2013

Knowledge is Power, and so is Communication
Rohit Parikh (CUNY)
2:00-4:00 PM, October 18th, 2013
Room 4419, CUNY GC

Abstract. The BDI theory says that people’s actions are influenced by two factors: what they believe and what they want. Thus we can influence people’s actions by what we choose to tell them or by the knowledge that we withhold. Shakespeare’s Beatrice-Benedick case in Much Ado about Nothing is an old example. Currently we often use Kripke structures to represent knowledge (and belief). So we will address the following issues: a) How can we bring about a state of knowledge, represented by a Kripke structure, not only about facts, but also about the knowledge of others, among a group of agents? b) What kind of a theory of action under uncertainty can we use to predict how people will act under various states of knowledge? c) How can A say something credible to B when their interests (their payoff matrices) are in partial conflict? When can B trust A not to lie about this matter?
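
Point (a) can be illustrated with a toy Kripke structure. The sketch below (an invented model, not from the talk) evaluates both first-order knowledge and knowledge about another agent's knowledge, mirroring the questions about agents A and B in the abstract.

```python
def knows(agent, phi, w, R):
    """K_agent(phi) holds at w iff phi holds at every world the agent
    considers possible from w. R: dict agent -> set of (world, world) pairs.
    phi: a predicate on worlds."""
    return all(phi(v) for (u, v) in R[agent] if u == w)

R = {'Ann': {(1, 1), (1, 2)},   # Ann cannot tell worlds 1 and 2 apart
     'Bob': {(1, 1), (2, 2)}}   # Bob always knows which world he is in
p = lambda w: w == 1            # "p" is true exactly at world 1

print(knows('Bob', p, 1, R))   # True
print(knows('Ann', p, 1, R))   # False: Ann considers world 2 possible
# Does Ann know that Bob knows p? False: at world 2, which Ann considers
# possible, Bob knows not-p rather than p.
print(knows('Ann', lambda v: knows('Bob', p, v, R), 1, R))
```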

The Value of Ignorance and Objective Probabilities
Haim Gaifman (Columbia University)
2:00-4:00 PM, October 18th, 2013
Room 4419, CUNY GC

Abstract. There are many cases in which knowledge has negative value and a rational agent may be willing to pay for not being informed. Such cases can be classified into those that are essentially of the single-agent kind and those where the negative value of information derives from social interactions, the existence of certain institutions, or legal considerations. In the single-agent case the standard examples involve situations in which knowing has in itself a value, besides its instrumental cognitive value for achieving goals. But in certain puzzling examples knowing is still a cognitive instrument and yet it seems to be an obstacle. Some of these cases touch on foundational issues concerning the meaning of objective probabilities. Ellsberg’s paradox involves an example of this kind. I shall focus on some of these problems in the later part of the talk.