Formal Philosophy

Logic at Columbia University

Workshop on Decision Theory and Epistemology

by Yang Liu

March 3, 2018, 9:30am
716 Philosophy Hall
Columbia University


Jennifer Carr (University of California, San Diego)
Ryan Doody (Hebrew University of Jerusalem)
Harvey Lederman (Princeton University)
Chris Meacham (University of Massachusetts, Amherst)


Melissa Fusco (Columbia University)


Schervish: Finitely-Additive Decision Theory

by Robby

Finitely-Additive Decision Theory
Mark Schervish (Carnegie Mellon)
4:10 pm, Friday, February 16th, 2018
Faculty House, Columbia University

Abstract. We examine general decision problems with loss functions that are bounded below. We allow the loss function to assume the value ∞. No other assumptions are made about the action space, the types of data available, the types of non-randomized decision rules allowed, or the parameter space. By allowing prior distributions and the randomizations in randomized rules to be finitely-additive, we find very general complete class and minimax theorems. Specifically, under the sole assumption that the loss function is bounded below, every decision problem has a minimal complete class and all admissible rules are Bayes rules. Also, every decision problem has a minimax rule and a least-favorable distribution and every minimax rule is Bayes with respect to the least-favorable distribution. Some special care is required to deal properly with infinite-valued risk functions and integrals taking infinite values.  This talk will focus on some examples and the major differences between finitely-additive and countably-additive decision theory.  This is joint work with Teddy Seidenfeld, Jay Kadane, and Rafael Stern.
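The minimax/Bayes connection the abstract generalizes can be checked directly in the classical finite, countably-additive case. A toy sketch in Python (the loss numbers are invented for illustration; the grid search merely approximates the analytic solution):

```python
# Tiny 2-state, 2-action decision problem: rows = states, columns = actions.
# Illustrative loss matrix, bounded below as in the abstract's assumption.
L = [[0.0, 4.0], [3.0, 1.0]]

def bayes_risk(prior, action):
    """Expected loss of a non-randomized action under a prior on states."""
    return prior * L[0][action] + (1 - prior) * L[1][action]

def bayes_value(p):
    """Risk of the best action against prior p."""
    return min(bayes_risk(p, a) for a in (0, 1))

def worst_case_risk(q):
    """Worst-case risk of the randomized rule playing action 0 with prob q."""
    return max(q * L[s][0] + (1 - q) * L[s][1] for s in (0, 1))

grid = [i / 10000 for i in range(10001)]
p_star = max(grid, key=bayes_value)       # least-favorable prior
q_star = min(grid, key=worst_case_risk)   # minimax randomized rule

# p* ≈ 1/3 and q* ≈ 1/2; the Bayes value under p* and the minimax value
# both come out ≈ 2, so the minimax rule is Bayes against the
# least-favorable prior -- the relation the talk extends to the
# finitely-additive setting.
print(round(p_star, 3), round(q_star, 3))
```

In the finitely-additive setting of the talk, the priors and randomizations above would be allowed to be merely finitely additive, which is what makes the complete class and minimax theorems hold without further assumptions.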


Gaifman: The Price of Broadminded Probabilities and the Limitation of Science

by Robby

The Price of Broadminded Probabilities and the Limitation of Science

Haim Gaifman (Columbia University)
4:10 pm, Friday, December 8th, 2017
Faculty House, Columbia University

Abstract. A subjective probability function is broadminded to the extent that it assigns positive probabilities to conjectures that could possibly be true. Assigning such a conjecture the value 0 amounts to ruling out, a priori, the possibility of confirming the conjecture to any extent by the growing evidence. A positive value leaves open, in principle, the possibility of learning from the evidence. In general, broadmindedness is not an absolute notion but a graded one, and there is a price for it: the more broadminded the probability, the more complicated it is, because it has to assign non-zero values to more complicated conjectures. The framework suggested in the old Gaifman-Snir paper is suitable for phrasing this claim in a precise way and proving it. The technique by which the claim is established is to assume a definable probability function and to state, within the same language, a conjecture that could possibly be true but whose probability is 0.

The complexity of the conjecture depends on the complexity of the probability, i.e., the complexity of the formulas used in defining it. In the Gaifman-Snir paper we used the arithmetical hierarchy as a measure of complexity. It is possible, however, to establish similar results with respect to more “down to earth” measures, defined in terms of the time it takes to calculate the probabilities to given precisions.

A claim of this form, for a rather simple setup, was first proven by Hilary Putnam in his paper “‘Degree of Confirmation’ and Inductive Logic”, published in the 1963 Schilpp volume dedicated to Carnap. The proof uses, in a probabilistic context, a diagonalization technique of the kind used in set theory and in computer science. In the talk I shall present Putnam’s argument and show how diagonalization can be applied in considerably richer setups.
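The diagonal move can be seen in miniature: against any fixed computable predictor, build the sequence that always does whatever the predictor deems less likely; the predictor then never assigns the true continuation a probability above 1/2, so no amount of evidence confirms the sequence. A toy sketch, with a simple Laplace-rule frequency estimator standing in for an arbitrary definable probability function:

```python
def predictor(history):
    """Stand-in computable probability: Laplace-rule estimate of P(next bit = 1)."""
    return (sum(history) + 1) / (len(history) + 2)

# Diagonal sequence: at each step emit the bit the predictor considers
# *less* likely (ties broken toward 0), and record the probability the
# predictor assigned to the bit that actually occurred.
history = []
probs_of_truth = []
for _ in range(100):
    p1 = predictor(history)
    next_bit = 1 if p1 < 0.5 else 0
    probs_of_truth.append(p1 if next_bit == 1 else 1 - p1)
    history.append(next_bit)

# The predictor's probability for the observed bit never exceeds 1/2:
# it cannot learn the diagonal sequence, echoing Putnam's argument.
print(max(probs_of_truth))
```

The real argument, of course, diagonalizes against a *definable* probability function inside the same language, which is what ties the conjecture's complexity to the probability's complexity.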

The second part of the talk is rather speculative. I shall point out that there might be epistemic limitations to what human science can achieve, imposed by certain pragmatic factors, such as the criterion of repeatable experiments. All of this would recommend a skeptical attitude.

Parikh: Formalizing the Umwelt

by Robby

Formalizing the Umwelt
Rohit Parikh (CUNY)
4:10 pm, Friday, December 1, 2017
Faculty House, Columbia University

Abstract. The umwelt is a notion invented by the Baltic-German biologist Jakob von Uexküll. It represents how a creature, an animal, a child or even an adult “sees” the world, and is a precursor to the Wumpus world in the contemporary AI literature. A fly is caught in a spider’s web because its vision is too coarse to see the fine threads of the web. Thus, though the web is part of the world, it is not part of the fly’s umwelt. Similarly, a tick will suck not only on blood but also on any warm liquid covered by a membrane. In the tick’s umwelt, the blood and the warm liquid are “the same”. We represent an umwelt as a homomorphic image of the real world in which the creature, whatever it might be, has some perceptions, some powers, and some preferences (utilities, for convenience). Thus we can calculate the average utility of an umwelt, and also the utilities of two creatures combining their umwelts into a symbiosis. A creature may also have a “theory”, which is a map from sets of atomic sentences to sets of atomic sentences: atomic sentences which are observed may allow the creature to infer other atomic sentences not observed. This weak but useful notion of theory bypasses some of Davidson’s objections to animals having beliefs.

Russell, S. J. and Norvig, P. (2002). Artificial Intelligence: A Modern Approach (International Edition).
Von Uexküll, J., von Uexküll, M., and O’Neil, J. D. (2010). A Foray into the Worlds of Animals and Humans: With a Theory of Meaning. University of Minnesota Press.
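The homomorphic-image idea can be made concrete: a map from world states to umwelt states that collapses distinctions the creature cannot perceive, with utilities defined on the image. A minimal sketch of the tick example (states, probabilities, and utilities are all invented for illustration):

```python
# World states the tick may encounter, with made-up probabilities.
world = {"blood": 0.2, "warm_liquid_in_membrane": 0.1, "cold_surface": 0.7}

def umwelt(state):
    """The umwelt map: a homomorphism collapsing world states the tick
    cannot distinguish. Blood and warm liquid land on the same umwelt state."""
    if state in ("blood", "warm_liquid_in_membrane"):
        return "warm_membrane"   # "the same" for the tick
    return "nothing"

# Utilities attach to umwelt states, not world states.
utility = {"warm_membrane": 1.0, "nothing": 0.0}

# Average utility of the umwelt: expected utility under the world
# distribution, pushed through the umwelt map.
avg = sum(p * utility[umwelt(s)] for s, p in world.items())
print(round(avg, 3))  # 0.2*1 + 0.1*1 + 0.7*0 = 0.3
```

A symbiosis, in this picture, would combine two such maps into a finer joint image, and its average utility could be computed the same way.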


Synthese S.I. on Decision Theory and the Future of Artificial Intelligence

by Yang Liu

Guest Editors:
Stephan Hartmann (LMU Munich)
Yang Liu (University of Cambridge)
Huw Price (University of Cambridge)

There is increasing interest in the challenges of ensuring that the long-term development of artificial intelligence (AI) is safe and beneficial. Moreover, despite different perspectives, there is much common ground between mathematical and philosophical decision theory, on the one hand, and AI, on the other. The aim of the special issue is to explore links and joint research at the nexus between decision theory and AI, broadly construed.

We welcome submissions of individual papers covering topics in philosophy, artificial intelligence, and cognitive science that involve decision making, including, but not limited to:

  • causality
  • decision making with bounded resources
  • foundations of probability theory
  • philosophy of machine learning
  • philosophical and mathematical decision/game theory

Contributions must be original and not under review elsewhere. Although there is no prescribed word or page limit for submissions to Synthese, as a rule of thumb, papers typically run between 15 and 30 printed pages (in the journal’s printed format). Submissions should also include a separate title page containing the contact details of the author(s), an abstract (150-250 words), and a list of 4-6 keywords. All papers will be subject to the journal’s standard double-blind peer review.

Manuscripts should be submitted online through Editorial Manager. Please choose the appropriate article type for your submission by selecting “S.I. : DecTheory&FutOfAI” from the relevant drop-down menu.

The deadline for submissions is February 15, 2018.
For further information about the special issue, please visit the website:

Vasudevan: Entropy and Insufficient Reason

by Robby

Entropy and Insufficient Reason
Anubav Vasudevan (University of Chicago)
4:10 pm, Friday, November 10th, 2017
Faculty House, Columbia University

Abstract. One well-known objection to the principle of maximum entropy is the so-called Judy Benjamin problem, first introduced by van Fraassen (1981). The problem turns on the apparently puzzling fact that, on the basis of information relating to an event’s conditional probability, the maximum entropy distribution will almost always assign to the event conditionalized upon a probability strictly less than the one assigned to it by the uniform distribution. In this paper, I present an analysis of the Judy Benjamin problem that can help to make sense of this seemingly odd feature of maximum entropy inference. My analysis is based on the claim that, in applying the principle of maximum entropy, Judy Benjamin is not acting out of a concern to maximize uncertainty in the face of new evidence, but is rather exercising a certain brand of epistemic charity towards her informant. This charity takes the form of an assumption on the part of Judy Benjamin that her informant’s evidential report leaves out no relevant information. I will explain how this single assumption suffices to rationalize Judy Benjamin’s behavior. I will then explain how such a re-conceptualization of the motives underlying Judy Benjamin’s appeal to the principle of maximum entropy can further our understanding of the relationship between this principle and the principle of insufficient reason. I will conclude with a discussion of the foundational significance for probability theory of ergodic theorems (e.g., de Finetti’s theorem) describing the asymptotic behavior of measure-preserving transformation groups. In particular, I will explain how these results, which serve as the basis of maximum entropy inference, can provide a unified conceptual framework in which to justify both a priori and a posteriori probabilistic reasoning.
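The puzzling fact is easy to reproduce numerically. In the standard three-cell version of the problem (cells Blue, Red-HQ, Red-2nd, each 1/3 under the uniform distribution, so P(Red) = 2/3), the informant reports a conditional probability within Red, e.g. P(Red-2nd | Red) = 3/4. Maximizing entropy under that constraint gives the conditioning event Red strictly less than 2/3. A grid-search sketch (the 3/4 figure follows van Fraassen's example; the search itself is just illustrative):

```python
from math import log

# Parameterize by r = P(Red); the constraint P(Red-2nd | Red) = 3/4
# fixes the split inside Red, leaving entropy a function of r alone.
def entropy(r):
    cells = [1 - r, 0.75 * r, 0.25 * r]  # Blue, Red-HQ, Red-2nd
    return -sum(p * log(p) for p in cells if p > 0)

grid = [i / 100000 for i in range(1, 100000)]
r_star = max(grid, key=entropy)

# The maximum-entropy value of P(Red) is about 0.637, strictly below
# the uniform value 2/3: conditional information about Red has lowered
# the probability of Red itself.
print(round(r_star, 3))
```

This is exactly the feature the abstract proposes to rationalize: the drop in P(Red) is not a defect of entropy maximization but, on the paper's analysis, a consequence of treating the informant's report as exhaustive of the relevant information.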