Formal Philosophy

Logic at Columbia University


Cresto: Ungrounded Payoffs

by Robby

Eleonora Cresto (Instituto de Filosofía de la UBA, Universidad Torcuato Di Tella, UNTREF)
4:10 pm, Friday, May 4th, 2018
Faculty House, Columbia University

Abstract. I explore a game theoretic analysis of social interactions in which each agent’s well-being depends crucially on the well-being of another agent. As a result of this, payoffs are interdependent and cannot be fixed, and hence the overall assessment of particular courses of action becomes ungrounded. A paradigmatic example of this general phenomenon occurs when both players are ‘reflective altruists’, in a sense to be explained. I begin by making an analogy with semantic ungroundedness and semantic paradoxes, and then I show how to proceed in order to model such interactions successfully. I argue that we obtain a second order coordination game for subjective probabilities, in which agents try to settle on a single matrix. As we will see, the phenomenon highlights a number of interesting connections among the concepts of self-knowledge, common knowledge and common belief.
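The circularity described in the abstract can be illustrated with a deliberately simple linear model (my sketch, not Cresto's formalism): each reflective altruist's utility incorporates a share of the other's utility, and the payoffs are well-defined only when the resulting fixed-point equation has a solution.

```python
# Toy model (not Cresto's own construction): two "reflective altruists"
# whose utilities each include a share of the other's utility:
#   u_A = b_A + alpha * u_B
#   u_B = b_B + beta  * u_A
# Substituting gives u_A = (b_A + alpha * b_B) / (1 - alpha * beta),
# so the payoffs are grounded only when alpha * beta != 1.

def interdependent_payoffs(b_a, b_b, alpha, beta):
    """Return the fixed-point utilities (u_A, u_B), or None if ungrounded."""
    if abs(1 - alpha * beta) < 1e-12:
        return None  # the circular definition has no unique solution
    u_a = (b_a + alpha * b_b) / (1 - alpha * beta)
    u_b = b_b + beta * u_a
    return u_a, u_b

print(interdependent_payoffs(1.0, 2.0, 0.5, 0.5))  # grounded: (~2.67, ~3.33)
print(interdependent_payoffs(1.0, 2.0, 1.0, 1.0))  # ungrounded: None
```

When `alpha * beta = 1`, as with two fully reflective altruists, the mutual dependence never bottoms out, which is the analogue of semantic ungroundedness in the abstract.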

Icard: On the Rational Role of Randomization

by Robby

On the Rational Role of Randomization
Thomas Icard (Stanford)
4:10 pm, Friday, April 13th, 2018
Faculty House, Columbia University

Abstract. Randomized acts play a marginal role in traditional Bayesian decision theory, essentially that of tie-breaking. Meanwhile, rationales for randomized decisions have been offered in a number of areas, including game theory, experimental design, and machine learning. A common and plausible way of accommodating some (but not all) of these ideas from a Bayesian perspective is by appeal to a decision maker’s bounded computational resources. Making this suggestion both precise and compelling is surprisingly difficult. We propose a distinction between interesting and uninteresting cases where randomization can help a decision maker, with the eventual aim of achieving a unified story about the rational role of randomization. The interesting cases, we claim, all arise from constraints on memory.
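A standard example of randomization helping a bounded agent (my illustration, not Icard's) is repeated matching pennies against an adversary who knows the agent's strategy: any deterministic bounded-memory rule can be predicted and exploited, while a randomizing agent guarantees an expected match rate of one half no matter what the adversary knows.

```python
import random

# Sketch (my example, not from the talk): repeated matching pennies
# against an adversary who knows the agent's strategy. A deterministic
# rule is fully exploited; a coin-flipping agent guarantees 50% in
# expectation regardless of the adversary's knowledge.

random.seed(0)
ROUNDS = 10_000

# Deterministic agent: alternates 0, 1, 0, 1, ... (a tiny-memory rule).
det_wins = 0
for t in range(ROUNDS):
    agent = t % 2
    adversary = 1 - agent      # adversary predicts the rule and mismatches
    det_wins += (agent == adversary)

# Randomizing agent: fair coin each round; no prediction helps.
rand_wins = 0
for t in range(ROUNDS):
    agent = random.randint(0, 1)
    adversary = 0              # any fixed choice; agent matches half the time
    rand_wins += (agent == adversary)

print(det_wins / ROUNDS)   # 0.0: deterministic play is fully exploited
print(rand_wins / ROUNDS)  # close to 0.5
```

The abstract's point is subtler: from a Bayesian perspective such game-theoretic rationales need careful handling, and the genuinely interesting cases are claimed to trace back to memory constraints like the bounded rule above.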


Workshop on Decision Theory and Epistemology

by Yang Liu

March 3, 2018, 9:30am
716 Philosophy Hall
Columbia University


Jennifer Carr (University of California, San Diego)
Ryan Doody (Hebrew University of Jerusalem)
Harvey Lederman (Princeton University)
Chris Meacham (University of Massachusetts, Amherst)


Melissa Fusco (Columbia University)


Schervish: Finitely-Additive Decision Theory

by Robby

Finitely-Additive Decision Theory
Mark Schervish (Carnegie Mellon)
4:10 pm, Friday, February 16th, 2018
Faculty House, Columbia University

Abstract. We examine general decision problems with loss functions that are bounded below. We allow the loss function to assume the value ∞. No other assumptions are made about the action space, the types of data available, the types of non-randomized decision rules allowed, or the parameter space. By allowing prior distributions and the randomizations in randomized rules to be finitely-additive, we find very general complete class and minimax theorems. Specifically, under the sole assumption that the loss function is bounded below, every decision problem has a minimal complete class and all admissible rules are Bayes rules. Also, every decision problem has a minimax rule and a least-favorable distribution and every minimax rule is Bayes with respect to the least-favorable distribution. Some special care is required to deal properly with infinite-valued risk functions and integrals taking infinite values.  This talk will focus on some examples and the major differences between finitely-additive and countably-additive decision theory.  This is joint work with Teddy Seidenfeld, Jay Kadane, and Rafael Stern.
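The minimax/Bayes link the talk generalizes can be seen in the simplest finite (hence countably-additive) setting. The toy below is my illustration, not the talk's construction: two states, three actions, and a grid search for the least-favorable prior, at which the Bayes risk equals the minimax risk.

```python
# Finite toy of the minimax/Bayes connection (my example; the talk's
# point is that this extends, with finite additivity, far beyond such
# well-behaved cases). Two states, three actions, loss L[theta][a];
# action 2 is an "equalizer" with constant risk 0.5.

L = [[0.0, 1.0, 0.5],   # losses under theta = 0
     [1.0, 0.0, 0.5]]   # losses under theta = 1

def bayes_risk(p):
    """Minimal expected loss over actions for the prior (p, 1 - p)."""
    return min(p * L[0][a] + (1 - p) * L[1][a] for a in range(3))

# Least-favorable prior: the p at which the Bayes risk is largest.
grid = [i / 1000 for i in range(1001)]
p_star = max(grid, key=bayes_risk)

# Minimax risk over actions: min over a of max over theta.
minimax = min(max(L[0][a], L[1][a]) for a in range(3))

print(p_star, bayes_risk(p_star), minimax)  # 0.5 0.5 0.5
```

Here the least-favorable prior is uniform, the equalizer action is Bayes against it, and its risk equals the minimax risk, exactly the pattern the complete class and minimax theorems assert in general.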


Gaifman: The Price of Broadminded Probabilities and the Limitation of Science

by Robby

The Price of Broadminded Probabilities and the Limitation of Science

Haim Gaifman (Columbia University)
4:10 pm, Friday, December 8th, 2017
Faculty House, Columbia University

Abstract. A subjective probability function is broadminded to the extent that it assigns positive probabilities to conjectures that could possibly be true. Assigning to such a conjecture the value 0 amounts to a priori ruling out the possibility of confirming the conjecture to any extent by the growing evidence. A positive value leaves, in principle, the possibility of learning from the evidence. In general, broadmindedness is not an absolute notion, but a graded one, and there is a price for it: the more broadminded the probability, the more complicated it is, because it has to assign non-zero values to more complicated conjectures. The framework suggested in the old Gaifman–Snir paper is suitable for phrasing this claim in a precise way and proving it. The technique by which this claim is established is to assume a definable probability function, and to state within the same language a conjecture that could possibly be true, whose probability is 0.

The complexity of the conjecture depends on the complexity of the probability, i.e., the complexity of the formulas that are used in defining it. In the Gaifman–Snir paper we used the arithmetical hierarchy as a measure of complexity. It is possible, however, to establish similar results with respect to more “down to earth” measures, defined in terms of the time it takes to calculate the probabilities, with given precisions.

A claim of this form, for a rather simple setup, was first proven by Hilary Putnam in his paper “‘Degree of Confirmation’ and Inductive Logic”, which was published in the 1963 Schilpp volume dedicated to Carnap. The proof uses, in a probabilistic context, a diagonalization technique of the kind used in set theory and in computer science. In the talk I shall present Putnam’s argument and show how diagonalization can be applied in considerably richer setups.
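The shape of Putnam's diagonal argument can be sketched in a few lines (a schematic rendering, not his exact construction): given any computable extrapolation method that predicts the next bit of a sequence from the bits seen so far, the "diagonal" sequence that always does the opposite is itself recursive, yet the method never predicts it correctly.

```python
# Schematic version of Putnam-style diagonalization (my rendering, not
# Putnam's exact argument): any fixed computable predictor is defeated
# by the recursive sequence built to contradict it at every step.

def majority_predictor(history):
    """A sample extrapolation method: predict the majority bit so far."""
    return 1 if sum(history) * 2 >= len(history) else 0

def diagonal_sequence(predictor, n):
    """The recursive sequence built to defeat `predictor`."""
    seq = []
    for _ in range(n):
        seq.append(1 - predictor(seq))  # flip whatever is predicted
    return seq

seq = diagonal_sequence(majority_predictor, 20)
correct = sum(majority_predictor(seq[:i]) == seq[i] for i in range(len(seq)))
print(correct)  # 0: the predictor is wrong at every single step
```

In the probabilistic setting of the abstract, the same move produces a possibly-true conjecture to which a definable probability function must assign 0, which is the price of broadmindedness.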

The second part of the talk is rather speculative. I shall point out the possibility that there might be epistemic limitations to what human science can achieve, which are imposed by certain pragmatic factors, such as the criterion of repeatable experiments. All of this would recommend a skeptical attitude.

Parikh: Formalizing the Umwelt

by Robby

Formalizing the Umwelt
Rohit Parikh (CUNY)
4:10 pm, Friday, December 1, 2017
Faculty House, Columbia University

Abstract. The umwelt is a notion invented by the Baltic-German biologist Jakob von Uexküll. It represents how a creature, an animal, a child or even an adult “sees” the world, and is a precursor to the Wumpus world in contemporary AI literature. A fly is caught in a spider’s web because its vision is too coarse to see the fine threads of the web. Thus, though the web is part of the world, it is not a part of the fly’s umwelt. Similarly, a tick will suck not only on blood but also on any warm liquid covered by a membrane. In the tick’s umwelt, the blood and the warm liquid are “the same”. We represent an umwelt as a homomorphic image of the real world in which the creature, whatever it might be, has some perceptions, some powers, and some preferences (utilities for convenience). Thus we can calculate the average utility of an umwelt and also the utilities of two creatures combining their umwelts into a symbiosis. A creature may also have a “theory”, which is a map from sets of atomic sentences to sets of atomic sentences. Atomic sentences which are observed may allow the creature to infer other atomic sentences not observed. This weak but useful notion of theory bypasses some of Davidson’s objections to animals having beliefs.
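The tick example can be made concrete with a small sketch (my formalization of the abstract's ingredients, not Parikh's own definitions): the umwelt is a map collapsing world states into percepts, utilities live on percepts, and a "theory" sends observed atomic sentences to a larger inferred set.

```python
# Sketch of the abstract's ingredients (my formalization, not Parikh's):
# an umwelt as a homomorphic image of world states, utilities on percepts,
# and a "theory" mapping observed atoms to inferred atoms.

WORLD_STATES = ["blood", "warm_saline", "cold_water"]

def tick_umwelt(state):
    """The tick's perception map: blood and warm liquid are 'the same'."""
    return "warm_liquid" if state in ("blood", "warm_saline") else "other"

# Preferences (utilities, for convenience) are defined on percepts,
# not on the world states the creature cannot distinguish.
UTILITY = {"warm_liquid": 1.0, "other": 0.0}

def average_utility(states, umwelt):
    """Average utility of an umwelt, weighting world states uniformly."""
    return sum(UTILITY[umwelt(s)] for s in states) / len(states)

def tick_theory(observed):
    """A theory: from observed atomic sentences, infer further atoms."""
    inferred = set(observed)
    if "warm_liquid" in observed:
        inferred.add("suitable_for_feeding")
    return inferred

print(average_utility(WORLD_STATES, tick_umwelt))  # ~0.667
print(tick_theory({"warm_liquid"}))
```

Because the umwelt conflates blood with warm saline, the tick's average utility counts both as equally good, which is exactly the coarseness the abstract describes.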

Russell, S. J. and Norvig, P. (2002). Artificial Intelligence: A Modern Approach (International Edition).
von Uexküll, J., von Uexküll, M., and O’Neil, J. D. (2010). A Foray into the Worlds of Animals and Humans: With a Theory of Meaning. University of Minnesota Press.