Formal Philosophy

Logic at Columbia University

Bjorndahl: Language-Based Games

by Yang Liu

CUNY SEMINAR IN LOGIC AND GAMES
Language-Based Games
Adam Bjorndahl (Carnegie Mellon University)
10:30 AM to 12:30 PM, Friday, March 20, 2015
Room 7395, CUNY Graduate Center

Abstract: We introduce a generalization of classical game theory wherein each player has a fixed “language of preference”: a player can prefer one state of the world to another if and only if they can describe the difference between the two in this language. The expressiveness of the language therefore plays a crucial role in determining the parameters of the game. By choosing appropriately rich languages, this framework can capture classical games as well as various generalizations thereof (e.g., psychological games, reference-dependent preferences, and Bayesian games). On the other hand, coarseness in the language—cases where there are fewer descriptions than there are actual differences to describe—offers insight into some long-standing puzzles of human decision-making.

The Allais paradox, for instance, can be resolved simply and intuitively using a language with coarse beliefs: that is, by assuming that probabilities are represented not on a continuum, but discretely, using finitely many “levels” of likelihood (e.g., “no chance”, “slight chance”, “unlikely”, “likely”, etc.). Many standard solution concepts from classical game theory can be imported into the language-based framework by taking their epistemic characterizations as definitional. In this way, we obtain natural generalizations of Nash equilibrium, correlated equilibrium, and rationalizability. Using a simple example in which one player wishes to surprise her opponent, we show that there are language-based games that admit no Nash equilibrium. By contrast, the existence of rationalizable strategies can be proved under mild conditions. This is joint work with Joe Halpern and Rafael Pass.
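
To get a feel for how coarse likelihood levels dissolve the Allais pattern, here is a small Python toy (my own sketch, not the authors' formal model; the thresholds and labels are illustrative assumptions):

```python
# A toy illustration (not the authors' formal model) of how coarse likelihood
# levels can make the Allais choices consistent. Thresholds and labels are
# illustrative assumptions.

def coarse(p):
    """Map a numerical probability to a coarse likelihood level."""
    if p == 0:
        return "no chance"
    if p <= 0.05:
        return "slight chance"
    if p <= 0.45:
        return "unlikely"
    if p <= 0.55:
        return "toss-up"
    if p < 1:
        return "likely"
    return "certain"

def describe(gamble):
    """Describe a gamble (dict: outcome -> probability) in the coarse language."""
    return {outcome: coarse(p) for outcome, p in gamble.items()}

# The classic Allais gambles.
g1A = {"$1M": 1.0}
g1B = {"$5M": 0.10, "$1M": 0.89, "$0": 0.01}
g2A = {"$1M": 0.11, "$0": 0.89}
g2B = {"$5M": 0.10, "$0": 0.90}

for name, g in [("1A", g1A), ("1B", g1B), ("2A", g2A), ("2B", g2B)]:
    print(name, describe(g))
```

With these thresholds, 1A and 1B receive different descriptions (a "certain" $1M versus a "slight chance" of nothing), while 2A and 2B have identical likelihood profiles and differ only in the prize: 0.89 and 0.90 both count as "likely" to pay $0, so the coarse language cannot register the 1% difference, and the usual Allais preferences no longer contradict each other.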

Hartmann: Learning Conditionals and the Problem of Old Evidence

by Yang Liu

UNIVERSITY SEMINAR ON LOGIC, PROBABILITY, AND GAMES
Learning Conditionals and the Problem of Old Evidence
Stephan Hartmann (Ludwig Maximilians-Universität München)
4:10 pm, February 13, 2015
Faculty House, Columbia University

Abstract. The following are abstracts of two papers on which this talk is based.

The Problem of Old Evidence has troubled Bayesians ever since Clark Glymour first presented it in 1980. Several solutions have been proposed, but all of them have drawbacks and none of them is considered to be the definitive solution. In this article, I propose a new solution which combines several old ideas with a new one. It circumvents the crucial omniscience problem in an elegant way and leads to a considerable confirmation of the hypothesis in question.

Modeling how to learn an indicative conditional has been a major challenge for formal epistemologists. One proposal to meet this challenge is to construct the posterior probability distribution by minimizing the Kullback-Leibler divergence between the posterior and the prior probability distributions, taking the learned information, expressed as a conditional probability statement, into account as a constraint. This proposal has been criticized in the literature on the basis of several clever examples. In this article, we revisit four of these examples and show that one obtains intuitively correct results for the posterior probability distribution if the underlying probabilistic models reflect the causal structure of the scenarios in question.
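
As a rough illustration of the KL-minimization proposal discussed in the second paper, here is a short Python sketch (my own reconstruction under simplifying assumptions, not the authors' code) of learning the conditional "if A then B" as the constraint P*(B | A) = q over the four A/B worlds:

```python
# A minimal numerical sketch of updating on "if A then B", read as the
# constraint P*(B | A) = q, by minimizing the Kullback-Leibler divergence
# from the prior. Worlds are the four truth-value combinations of A and B.

def kl_update(prior, q):
    """prior: dict over worlds 'AB', 'A~B', '~AB', '~A~B'; q: target P*(B|A).

    Returns the distribution minimizing KL(post || prior) subject to
    post(B|A) = q. Working through the Lagrangian shows the minimizer tilts
    the A-worlds exponentially and rescales the ~A-worlds uniformly; solving
    the constraint yields the closed form below.
    """
    p_ab, p_anb = prior['AB'], prior['A~B']
    t = p_ab * (1 - q) / (p_anb * q)       # tilting factor fixed by the constraint
    post = {
        'AB':   p_ab * t ** (-(1 - q)),
        'A~B':  p_anb * t ** q,
        '~AB':  prior['~AB'],
        '~A~B': prior['~A~B'],
    }
    z = sum(post.values())                  # renormalize
    return {w: v / z for w, v in post.items()}

prior = {'AB': 0.25, 'A~B': 0.25, '~AB': 0.25, '~A~B': 0.25}
post = kl_update(prior, q=0.9)              # learn: "if A, then almost surely B"
print(post)
print('P*(A) =', post['AB'] + post['A~B'])  # note: the probability of A shifts
```

Note that the unconditional probability of A can shift under this update (it does in the uniform-prior example above), which is one of the features the well-known counterexamples in the literature turn on.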

Gruszczyński: Methods of constructing points from regions of space

by Yang Liu

Rafał Gruszczyński (Nicolaus Copernicus University, Toruń) will give an informal, non-colloquium talk this Friday, Nov. 21, at 2pm, in the seminar room (Philosophy 716). The title of the talk is “Methods of constructing points from regions of space”. Everybody is invited. The talk should be of special interest to colleagues and students working in logic, ontology, the philosophy of mathematics, and the philosophy of space and time.

Wheeler: The Rise and Fall of Accuracy-first Epistemology

by Yang Liu

UNIVERSITY SEMINAR ON LOGIC, PROBABILITY, AND GAMES
The Rise and Fall of Accuracy-first Epistemology
Gregory Wheeler (Ludwig Maximilian University of Munich)
4:10 pm, October 31, 2014
Room 2, Faculty House, Columbia University

Abstract. Accuracy-first epistemology aims to supply non-pragmatic justifications for a variety of epistemic norms. The contemporary basis for accuracy-first epistemology is Jim Joyce’s program to reinterpret de Finetti’s scoring-rule arguments in terms of a “purely epistemic” notion of “gradational accuracy.” On Joyce’s account, scoring rules are taken to measure the accuracy of an agent’s belief state with respect to the true state of the world, where accuracy is conceived to be a purely epistemic good. Joyce’s non-pragmatic vindication of probabilism, then, is an argument to the effect that a measure of gradational accuracy satisfies conditions that are close enough to those necessary to run a de Finetti-style coherence argument. A number of philosophers, including Hannes Leitgeb and Richard Pettigrew, have embraced Joyce’s program whole hog. Leitgeb and Pettigrew, for instance, have argued that Joyce’s program is too lax and have proposed conditions that narrow down the class of admissible gradational accuracy functions, while Pettigrew and his collaborators have sought to extend the list of epistemic norms receiving an accuracy-first treatment, a program that Pettigrew calls Epistemic Decision Theory.

In this talk I report on joint work with Conor Mayo-Wilson that challenges the core doctrine of Epistemic Decision Theory, namely the proposal to supply a purely non-pragmatic justification for anything resembling the von Neumann and Morgenstern axioms for a numerical epistemic utility function. Indeed, we argue that none of the axioms necessary for Epistemic Decision Theory has a satisfactory non-pragmatic justification, and we point to reasons to suspect that not all of the axioms could be given one. Our argument, if sound, also has consequences for recent discussions of “pragmatic encroachment”. For if pragmatic encroachment is a debate about whether there is a pragmatic component to the justification condition of knowledge, our arguments may be viewed as addressing the true-belief condition of (fallibilist) accounts of knowledge.
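
For readers unfamiliar with the dominance-style reasoning behind Joyce’s vindication of probabilism, here is a minimal Python sketch of a standard textbook-style example (the specific numbers are my own, chosen for illustration): an incoherent credence function is Brier-dominated by a coherent one in every possible world.

```python
# A small sketch of the kind of accuracy-dominance argument Joyce's program
# rests on: incoherent credences are Brier-dominated by coherent ones.
# The credence values below are illustrative assumptions.

def brier(credences, world):
    """Squared-error inaccuracy of credences (dict) at a world (dict of 0/1 truth values)."""
    return sum((credences[p] - world[p]) ** 2 for p in credences)

# Incoherent credences over A and its negation (they sum to 0.7, not 1).
b = {'A': 0.2, 'not-A': 0.5}
# A coherent alternative: the Euclidean projection of b onto the probability simplex.
c = {'A': 0.35, 'not-A': 0.65}

worlds = [{'A': 1, 'not-A': 0}, {'A': 0, 'not-A': 1}]
for w in worlds:
    print(w, 'incoherent:', round(brier(b, w), 4), 'coherent:', round(brier(c, w), 4))
# In both worlds the coherent credences are strictly more accurate, so b is
# accuracy-dominated: the "purely epistemic" analogue of a Dutch book.
```

Accuracy-first epistemology treats this kind of dominance as a purely epistemic reason for probabilistic coherence; as the abstract indicates, Wheeler and Mayo-Wilson’s challenge targets the further axioms needed to turn such scores into a full numerical epistemic utility theory.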

Workshop on Pragmatics, Relevance and Game Theory

by Yang Liu

Workshop on Pragmatics, Relevance and Game Theory
CUNY Graduate Center, Rm. 9207
October 14 and 15, 2014

Preliminary list of speakers:
Deirdre Wilson (UCL)
Laurence Horn (Yale)
Kent Bach (SFSU)
Robyn Carston (UCL)
Ariel Rubinstein (NYU and Tel Aviv)

CUNY:
Michael Devitt
Stephen Neale
Rohit Parikh

Students:
Marilynn Johnson (CUNY)
Ignacio Ojea (Columbia)
Todd Stambaugh (CUNY)
Cagil Tasdemir (CUNY)

Program here.

Ramanujam: Reasoning in games that change during play

by Yang Liu

CUNY SEMINAR IN LOGIC, PROBABILITY, AND GAMES
Reasoning in games that change during play
R. Ramanujam (Institute of Mathematical Sciences, India)
4:00 – 6:00 PM, Friday, June 2, 2014
Room 4421, CUNY GC

Abstract. We consider large games, in which the number of players is so large that outcomes are determined not by strategy profiles, but by distributions. In the model we study, a society player monitors choice distributions and intervenes periodically, leading to game changes. Rationality of individual players and that of the society player are mutually interdependent in such games. We discuss stability issues, and mention applications to infrastructure problems.
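
The formal model is not spelled out in the abstract, but the following toy Python simulation (entirely my own illustration, with made-up payoffs and thresholds) may help convey the flavor: payoffs depend only on the distribution of choices, and a society player monitors that distribution and periodically changes the game.

```python
# A toy dynamic (purely illustrative, not the speaker's model) of a large game
# in which payoffs depend only on the distribution of choices, and a "society
# player" intervenes periodically, changing the game.

share_car = 0.5                 # fraction currently choosing "car" over "transit"
toll = 0.0                      # the society player's instrument

def payoff_car(share, toll):
    return 1.0 - share - toll   # congestion: driving is worse the more people drive

def payoff_transit():
    return 0.4                  # flat outside option

for period in range(20):
    # Players myopically drift toward the action that does better
    # against the current distribution.
    better_car = payoff_car(share_car, toll) > payoff_transit()
    drift = 0.1 if better_car else -0.1
    share_car = min(1.0, max(0.0, share_car + drift))

    # The society player monitors the distribution and intervenes periodically.
    if period % 5 == 4:
        toll = 0.3 if share_car > 0.55 else 0.0   # game change: payoffs are altered

    print(period, round(share_car, 2), toll)
```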