The seminar is concerned with applying formal methods to fundamental issues, with an emphasis on probabilistic reasoning, decision theory and games. In this context “logic” is broadly interpreted as covering applications that involve formal representations. The topics of interest have been researched within a very broad spectrum of disciplines, including philosophy, statistics, economics, and computer science. The seminar is intended to bring together scholars from different fields of research so as to illuminate problems of common interest from different perspectives. Throughout each academic year, our monthly meetings feature presentations by members of the seminar and by distinguished guest speakers. In the spring of 2014, the seminar became an integral part of the University Seminars at Columbia University.

**Past speakers:** Arif Ahmed, Tim Button, Eleonora Cresto, Persi Diaconis, Kenny Easwaran, Branden Fitelson, Juliet Floyd, Haim Gaifman, Stephan Hartmann, Thomas Icard, Daniel Kahneman, Edi Karni, Peter Koellner, Hannes Leitgeb, Christian List, Bud Mishra, Eric Pacuit, Rohit Parikh, Huw Price, Mark Schervish, Teddy Seidenfeld, Anubav Vasudevan, Gregory Wheeler.

**Archive:** 2017 – 2018 | 2016 – 2017 | 2015 – 2016 | 2014 – 2015 | 2013 – 2014

# 2017 – 2018 Meetings

**Co-Chairs:**

Haim Gaifman (Columbia)

Rohit Parikh (CUNY)

Yang Liu (Cambridge)

**Rapporteur:**

Robby Finley (Columbia)

***

## May, 2018

**Ungrounded Payoffs. A Tale of Unconditional Love and Unrepentant Hate**

Eleonora Cresto (Instituto de Filosofía de la UBA, Universidad Torcuato Di Tella, UNTREF)

4:10 pm, Friday, May 4th, 2018

Faculty House, Columbia University

*Abstract.* I explore a game theoretic analysis of social interactions in which each agent’s well-being depends crucially on the well-being of another agent. As a result of this, payoffs are interdependent and cannot be fixed, and hence the overall assessment of particular courses of action becomes ungrounded. A paradigmatic example of this general phenomenon occurs when both players are ‘reflective altruists’, in a sense to be explained. I begin by making an analogy with semantic ungroundedness and semantic paradoxes, and then I show how to proceed in order to model such interactions successfully. I argue that we obtain a second order coordination game for subjective probabilities, in which agents try to settle on a single matrix. As we will see, the phenomenon highlights a number of interesting connections among the concepts of self-knowledge, common knowledge and common belief.

## April, 2018

**On the Rational Role of Randomization**

Thomas Icard (Stanford)

4:10 pm, Friday, April 13th, 2018

Faculty House, Columbia University

*Abstract.* Randomized acts play a marginal role in traditional Bayesian decision theory, essentially that of tie-breaking. Meanwhile, rationales for randomized decisions have been offered in a number of areas, including game theory, experimental design, and machine learning. A common and plausible way of accommodating some (but not all) of these ideas from a Bayesian perspective is by appeal to a decision maker’s bounded computational resources. Making this suggestion both precise and compelling is surprisingly difficult. We propose a distinction between interesting and uninteresting cases where randomization can help a decision maker, with the eventual aim of achieving a unified story about the rational role of randomization. The interesting cases, we claim, all arise from constraints on memory.

## February, 2018

**Finitely-Additive Decision Theory**

Mark Schervish (Carnegie Mellon)

4:10 pm, Friday, February 16th, 2018

Faculty House, Columbia University

*Abstract.* We examine general decision problems with loss functions that are bounded below. We allow the loss function to assume the value ∞. No other assumptions are made about the action space, the types of data available, the types of non-randomized decision rules allowed, or the parameter space. By allowing prior distributions and the randomizations in randomized rules to be finitely-additive, we find very general complete class and minimax theorems. Specifically, under the sole assumption that the loss function is bounded below, every decision problem has a minimal complete class and all admissible rules are Bayes rules. Also, every decision problem has a minimax rule and a least-favorable distribution and every minimax rule is Bayes with respect to the least-favorable distribution. Some special care is required to deal properly with infinite-valued risk functions and integrals taking infinite values. This talk will focus on some examples and the major differences between finitely-additive and countably-additive decision theory. This is joint work with Teddy Seidenfeld, Jay Kadane, and Rafael Stern.

## December, 2017

**The Price of Broadminded Probabilities and the Limitation of Science**

Haim Gaifman (Columbia University)

4:10 pm, Friday, December 8th, 2017

Faculty House, Columbia University

*Abstract.* A subjective probability function is broadminded to the extent that it assigns positive probabilities to conjectures that could possibly be true. Assigning such a conjecture the value 0 amounts to ruling out *a priori* the possibility of confirming the conjecture to any extent by the growing evidence. A positive value leaves open, in principle, the possibility of learning from the evidence. In general, broadmindedness is not an absolute notion but a graded one, and there is a price for it: the more broadminded the probability, the more complicated it is, because it has to assign non-zero values to more complicated conjectures. The framework suggested in the old Gaifman–Snir paper is suitable for phrasing this claim in a precise way and proving it. The technique by which the claim is established is to assume a definable probability function, and to state within the same language a conjecture that could possibly be true whose probability is 0.

The complexity of the conjecture depends on the complexity of the probability, i.e., the complexity of the formulas that are used in defining it. In the Gaifman–Snir paper we used the arithmetical hierarchy as a measure of complexity. It is possible, however, to establish similar results with respect to more “down to earth” measures, defined in terms of the time that it takes to calculate the probabilities to given precisions.

A claim of this form, for a rather simple setup, was first proven by Hilary Putnam in his paper “‘Degree of Confirmation’ and Inductive Logic”, published in the 1963 Schilpp volume dedicated to Carnap. The proof uses, in a probabilistic context, a diagonalization technique of the kind used in set theory and in computer science. In the talk I shall present Putnam’s argument and show how diagonalization can be applied in considerably richer setups.

The second part of the talk is rather speculative. I shall point out the possibility that there might be epistemic limitations to what human science can achieve, imposed by certain pragmatic factors ‒ such as the criterion of repeatable experiments. All of this would recommend a skeptical attitude.

**Formalizing the Umwelt**

Rohit Parikh (CUNY)

4:10 pm, Friday, December 1, 2017

Faculty House, Columbia University

*Abstract.* The *umwelt* is a notion invented by the Baltic-German biologist Jakob von Uexküll. It represents how a creature (an animal, a child, or even an adult) “sees” the world, and it is a precursor to the Wumpus world of the contemporary AI literature. A fly is caught in a spider’s web because its vision is too coarse to see the fine threads of the web. Thus, though the web is part of the *world*, it is not a part of the fly’s *umwelt*. Similarly, a tick will suck not only on blood but also on any warm liquid covered by a membrane: in the tick’s umwelt, the blood and the warm liquid are “the same”. We represent an umwelt as a homomorphic image of the real world in which the creature, whatever it might be, has some perceptions, some powers, and some preferences (utilities, for convenience). Thus we can calculate the average utility of an umwelt, and also the utilities of two creatures combining their umwelts into a symbiosis. A creature may also have a “theory”, which is a map from sets of atomic sentences to sets of atomic sentences. Atomic sentences which are *observed* may allow the creature to infer other atomic sentences *not observed*. This weak but useful notion of theory bypasses some of Davidson’s objections to animals having beliefs.
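The “theory” notion at the end of the abstract, a map from sets of atomic sentences to sets of atomic sentences, can be rendered as a few lines of code. The creatures and rules below are hypothetical illustrations, not material from the talk:

```python
# A creature's "theory": a map from sets of observed atomic sentences to the
# atomic sentences it infers.  The rules below are hypothetical illustrations.
rules = {
    frozenset({"warm", "liquid", "membrane"}): {"food"},    # the tick's rule
    frozenset({"fine-thread-visible"}): {"web", "danger"},  # a sharper-eyed creature's
}

def infer(observed):
    # Close the observed atoms under the rules until nothing new is added.
    inferred = set(observed)
    while True:
        new = set(inferred)
        for premises, conclusions in rules.items():
            if premises <= new:
                new |= conclusions
        if new == inferred:
            return inferred
        inferred = new

# In the tick's umwelt, any warm membrane-covered liquid counts as food:
print(infer({"warm", "liquid", "membrane"}))
```

The map is just a closure operator on observed atoms; no richer propositional attitude need be attributed to the creature.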

## November, 2017

**Entropy and Insufficient Reason**

Anubav Vasudevan (University of Chicago)

4:10 pm, Friday, November 10, 2017

Faculty House, Columbia University

*Abstract.* One well-known objection to the principle of maximum entropy is the so-called Judy Benjamin problem, first introduced by van Fraassen (1981). The problem turns on the apparently puzzling fact that, on the basis of information relating to an event’s conditional probability, the maximum entropy distribution will almost always assign to the conditioning event a probability strictly less than that assigned to it by the uniform distribution. In this paper, I present an analysis of the Judy Benjamin problem that can help to make sense of this seemingly odd feature of maximum entropy inference. My analysis is based on the claim that, in applying the principle of maximum entropy, Judy Benjamin is not acting out of a concern to maximize uncertainty in the face of new evidence, but is rather exercising a certain brand of epistemic charity towards her informant. This charity takes the form of an assumption on the part of Judy Benjamin that her informant’s evidential report leaves out no relevant information. I will explain how this single assumption suffices to rationalize Judy Benjamin’s behavior. I will then explain how such a re-conceptualization of the motives underlying Judy Benjamin’s appeal to the principle of maximum entropy can further our understanding of the relationship between this principle and the principle of insufficient reason. I will conclude with a discussion of the foundational significance for probability theory of ergodic theorems (e.g., de Finetti’s theorem) describing the asymptotic behavior of measure-preserving transformation groups. In particular, I will explain how these results, which serve as the basis of maximum entropy inference, can provide a unified conceptual framework in which to justify both a priori and a posteriori probabilistic reasoning.
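The puzzling fact can be checked numerically. The sketch below is an illustration, not material from the talk; the prior (uniform over Blue, Red-HQ, Red-2nd) and the learned constraint P(2nd | Red) = 3/4 are the numbers of the standard textbook version of van Fraassen’s example:

```python
import math

# Standard Judy Benjamin setup (assumed, not from the talk):
# prior P(Blue) = 1/2, P(Red-HQ) = 1/4, P(Red-2nd) = 1/4.
# Judy learns the conditional constraint P(2nd | Red) = 3/4,
# i.e. q(2nd) = 3 * q(HQ).  The maximum-entropy update minimizes
# KL divergence from the prior subject to this constraint.
prior = {"hq": 0.25, "second": 0.25, "blue": 0.5}

def kl_to_prior(x):
    # Parametrize the posterior by x = q(HQ); the constraint fixes the rest.
    q = {"hq": x, "second": 3 * x, "blue": 1 - 4 * x}
    return sum(q[w] * math.log(q[w] / prior[w]) for w in q)

# Grid search for the minimizer (a closed form exists: x = 1/(4 + 2 * 27**0.25)).
best_x = min((i / 100000 for i in range(1, 25000)), key=kl_to_prior)
p_red = 4 * best_x  # posterior probability of the conditioning event, Red

# Red, the event conditioned on, ends up below its prior/uniform value of 1/2.
print(f"P(Red) after maxent update: {p_red:.3f}")
```

The minimizer gives P(Red) ≈ 0.467: the conditioning event drops below its uniform value of 1/2, exactly the feature the abstract describes.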

# 2016 – 2017 Meetings

**Co-Chairs:**

Haim Gaifman (Columbia)

Rohit Parikh (CUNY)

Yang Liu (Cambridge)

**Rapporteur:**

Robby Finley (Columbia)

***

## April, 2017

**Internal categoricity and internal realism in the philosophy of mathematics**

Tim Button (University of Cambridge)

4:10 pm, Wednesday, April 19, 2017

Faculty House, Columbia University

*Abstract.* Many philosophers think that mathematics is about ‘structure’. Many philosophers would also explicate this notion of ‘structure’ via model theory. But the Compactness and Löwenheim–Skolem theorems lead to some famously hard questions for this view. They threaten to leave us unable to talk about any particular ‘structure’.

In this talk, I outline how we might explicate ‘structure’ without appealing to model theory, and indeed without invoking any kind of semantic ascent. The approach involves making use of internal categoricity. I will outline the idea of internal categoricity, state some results, and use these results to make sense of Putnam’s beautiful but cryptic claim: “Models are not lost noumenal waifs looking for someone to name them; they are constructions within our theory itself, and they have names from birth.”

**Gödel’s Disjunction**

Peter Koellner (Harvard)

5 pm, Friday, April 7, 2017

Philosophy 716, Columbia University

*Abstract.* Gödel’s disjunction asserts that either “the mind cannot be mechanized” or “there are absolutely undecidable statements.” Arguments are examined for and against each disjunct in the context of precise frameworks governing the notions of absolute provability and truth. The focus is on Penrose’s new argument, which interestingly involves type-free truth. In order to reconstruct Penrose’s argument, a system, DKT, is devised for absolute provability and type-free truth. It turns out that in this setting there are actually two versions of the disjunction and its disjuncts. The first, fully general versions end up being (provably) indeterminate. The second, restricted versions end up being (provably) determinate, and so, in this case there is at least an initial prospect of success. However, in this case it will be seen that although the disjunction itself is provable, neither disjunct is provable nor refutable in the framework.

## March, 2017

**An Epistemic Generalization of Rationalizability**

Rohit Parikh (CUNY)

4:10 pm, Friday, March 24, 2017

Faculty House, Columbia University

*Abstract.* Rationalizability, originally proposed by Bernheim and Pearce, generalizes the notion of Nash equilibrium. Nash equilibrium requires common knowledge of strategies; rationalizability requires only common knowledge of rationality. However, the original notion assumes that the payoffs are common knowledge: agents know what world they are in, but may be ignorant of what other agents are playing.

We generalize the original notion of rationalizability to consider situations where agents do not know what world they are in, or where some know but others do not know. Agents who know something about the world can take advantage of their superior knowledge. It may also happen that both Ann and Bob know about the world but Ann does not know that Bob knows. How might they act?

We will show how a notion of rationalizability in the context of partial knowledge, represented by a Kripke structure, can be developed.
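For readers unfamiliar with the original notion, classical rationalizability in a finite two-player game can be computed by iterated elimination. The sketch below uses a hypothetical 3×3 game and the simpler “point rationalizability” variant (best responses to pure beliefs only), an approximation of the full belief-based notion:

```python
# Toy 3x3 game (hypothetical payoffs, for illustration only).
u1 = [[2, 1, 4], [1, 2, 0], [0, 3, 5]]  # row player's payoff u1[r][c]
u2 = [[3, 2, 1], [2, 3, 1], [3, 1, 2]]  # column player's payoff u2[r][c]

rows, cols = {0, 1, 2}, {0, 1, 2}
while True:
    # Keep a row strategy iff it is a best response to some surviving column.
    new_rows = {r for c in cols for r in rows
                if u1[r][c] == max(u1[rr][c] for rr in rows)}
    # Keep a column strategy iff it is a best response to some surviving row.
    new_cols = {c for r in rows for c in cols
                if u2[r][c] == max(u2[r][cc] for cc in cols)}
    if (new_rows, new_cols) == (rows, cols):
        break  # fixed point: the (point-)rationalizable strategies
    rows, cols = new_rows, new_cols

print(rows, cols)  # here the process converges to a single profile
```

In this toy game the elimination converges to the single profile (r0, c0); the question raised in the abstract is what replaces this computation when the payoff matrices themselves are not common knowledge.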

## November, 2016

**Essential Simplifications of Savage’s Subjective Probabilities System**

Haim Gaifman (Columbia University) and Yang Liu (University of Cambridge)

4:10 pm, Friday, November 18, 2016

Faculty House, Columbia University

*Abstract.* I shall try to cover: (I) a short outline of Savage’s system; (II) a new mathematical technique for handling “partitions with errors”, which leads to a simplification that Savage tried but did not succeed in getting; (III) some philosophical analysis of an *idealized rational agent*, which is commonly used as a guideline for subjective probabilities.

Some acquaintance with Savage’s system is helpful, but I have included (I) in order to make the presentation self-contained.

The talk is based on joint work with Yang Liu. Please email Robby for an introductory section of the present draft of our paper.

## October, 2016

**Heart of DARCness**

Huw Price (University of Cambridge)

4:10 pm, Thursday, October 13, 2016

Faculty House, Columbia University

*Abstract.* Alan Hajek has recently criticised the thesis that Deliberation Crowds Out Prediction (renaming it the DARC thesis, for ‘Deliberation Annihilates Reflective Credence’). Hajek’s paper has reinforced my sense that proponents and opponents of this thesis often talk past one another. To avoid confusions of this kind we need to dissect our way to the heart of DARCness, and to distinguish it from various claims for which it is liable to be mistaken. In this talk, based on joint work with Yang Liu, I do some of this anatomical work. Properly understood, I argue, the heart is in good shape, and untouched by Hajek’s jabs at surrounding tissue. Moreover, a feature that Hajek takes to be a problem for the DARC thesis – that it commits us to widespread ‘credal gaps’ – turns out to be a common and benign feature of a broad class of cases, of which deliberation is easily seen to be one.

## September, 2016

**The Problem of Thinking Too Much**

Persi Diaconis (Stanford University)

4:10 pm, Friday, September 16, 2016

Faculty House, Columbia University

*Abstract.* We all know the problem: you sit there, turning things over, and nothing gets done. Indeed, there are examples where “quick and dirty” methods that throw away information dominate. My examples will be from Bayesian statistics and the mathematics of coin tossing, but I will try to survey some of the work in psychology, philosophy, and economics.

# 2015 – 2016 Meetings

**Co-Chairs:**

Haim Gaifman (Columbia)

Rohit Parikh (CUNY)

Yang Liu (Cambridge)

**Rapporteur:**

Robby Finley (Columbia)

***

## May, 2016

**Reason-based choice and context-dependence: an explanatory framework**

Christian List (London School of Economics)

4:10 pm, Friday, May 6, 2016

Faculty House, Columbia University

*Abstract.* We introduce a “reason-based” framework for explaining and predicting individual choices. The key idea is that a decision-maker focuses on some but not all properties of the options and chooses an option whose “motivationally salient” properties he/she most prefers. Reason-based explanations can capture two kinds of context-dependent choice: (i) the motivationally salient properties may vary across choice contexts, and (ii) they may include “context-related” properties, not just “intrinsic” properties of the options. Our framework allows us to explain boundedly rational and sophisticated choice behaviour. Since properties can be recombined in new ways, it also offers resources for predicting choices in unobserved contexts.

## March, 2016

**A New Framework for Aggregating Utility**

Kenny Easwaran (Texas A&M University)

4:10 pm, Friday, March 11, 2016

Faculty House, Columbia University

*Abstract.* It is often assumed that a natural way to aggregate utility over multiple agents is by addition. When there are infinitely many agents, this leads to various problems. Vallentyne and Kagan approach this problem by providing a partial ordering over outcomes, rather than a numerical aggregate value. Bostrom and Arntzenius both argue that without a numerical value, it is difficult to integrate this aggregation into our best method for considering acts with risky outcomes: expected value.

My 2014 paper, “Decision Theory without Representation Theorems”, describes a project for evaluating risky acts that extends expected value to cases where it is infinite or undefined. The project of this paper is to extend this methodology in a way that deals with risk and aggregation across agents simultaneously, instead of giving priority to one or the other as Bostrom and Arntzenius require. The result is still merely a partial ordering, but since it already includes all considerations of risk and aggregation, there is no further need for particular numerical representations.

## December, 2015

**Two Approaches to Belief Revision**

Branden Fitelson (Rutgers University)

4:10 pm, Friday, December 18, 2015

Faculty House, Columbia University

*Abstract.* In this paper, we compare and contrast two methods for revising qualitative (viz., “full”) beliefs. The first method is a (broadly) Bayesian one, which operates (in its most naive form) via conditionalization and the minimization of expected inaccuracy. The second method is the AGM approach to belief revision. Our aim here is to provide the most straightforward explanation of the ways in which these two methods agree and disagree with each other. Ultimately, we conclude that AGM may be seen as more epistemically risk-seeking (in a sense to be made precise in the talk) than EUT (from the Bayesian perspective).

This talk is based on joint work with Ted Shear.

## November, 2015

**Creolizing the Web**

Bud Mishra (Courant Institute, NYU)

4:10 pm, Friday, November 20, 2015

Faculty House, Columbia University

*Abstract.* This talk will focus on a set of game theoretic ideas with applications to Computer, Biological and Social Sciences. We will primarily rely on a realistic formulation of classical information-asymmetric signaling games, in a repeated form, while allowing the agents to dynamically vary their utility functions. We will also explore the design and creolization of a new natural language system (“InTuit”) specifically designed for the web.

The talk will build on our earlier experience in the areas of systems biology (evolutionary models), game theory, data science, model checking, causality analysis, cyber security, insider threat, virtualization and data markets.

## September, 2015

**Awareness of Unawareness: A Theory of Decision Making in the Face of Ignorance**

Edi Karni (Johns Hopkins University)

4:10 pm, Friday, September 25, 2015

Faculty House, Columbia University

*Abstract.* In the wake of growing awareness, decision makers anticipate that they might acquire knowledge that, in their current state of ignorance, is unimaginable. Supposedly, this anticipation manifests itself in the decision makers’ choice behavior. In this paper we model the anticipation of growing awareness, lay choice-based axiomatic foundations to subjective expected utility representation of beliefs about the likelihood of discovering unknown consequences, and assign utility to consequences that are not only unimaginable but may also be nonexistent. In so doing, we maintain the flavor of reverse Bayesianism of Karni and Vierø (2013, 2015).

# 2014 – 2015 Meetings

**Co-Chairs:**

Haim Gaifman (Columbia)

Rohit Parikh (CUNY)

**Rapporteur:**

Yang Liu (Columbia)

***

## May, 2015

**Gödel on Russell: Truth, Perception, and an Infinitary Version of the Multiple Relation Theory of Judgment**

Juliet Floyd (Boston University)

4:10 pm, May 8, 2015

Faculty House, Columbia University

## February, 2015

**Learning Conditionals and the Problem of Old Evidence**

Stephan Hartmann (Ludwig Maximilian University of Munich)

4:10 pm, February 13, 2015

Faculty House, Columbia University

*Abstract.* The following are abstracts of two papers on which this talk is based.

The Problem of Old Evidence has troubled Bayesians ever since Clark Glymour first presented it in 1980. Several solutions have been proposed, but all of them have drawbacks and none of them is considered to be the definitive solution. In this article, I propose a new solution which combines several old ideas with a new one. It circumvents the crucial omniscience problem in an elegant way and leads to a considerable confirmation of the hypothesis in question.

Modeling how to learn an indicative conditional has been a major challenge for formal epistemologists. One proposal to meet this challenge is to construct the posterior probability distribution by minimizing the Kullback-Leibler divergence between the posterior probability distribution and the prior probability distribution, taking the learned information as a constraint (expressed as a conditional probability statement) into account. This proposal has been criticized in the literature based on several clever examples. In this article, we revisit four of these examples and show that one obtains intuitively correct results for the posterior probability distribution if the underlying probabilistic models reflect the causal structure of the scenarios in question.

## December, 2014

**Two lessons to remember from the Sleeping Beauty problem**

Teddy Seidenfeld (Carnegie Mellon University)

4:10 pm, December 5, 2014

Faculty House, Columbia University

## November, 2014

**Conversation about Human Judgment and Decision-making**

Daniel Kahneman (Princeton University)

4:10 pm, November 7, 2014

Faculty House, Columbia University

## October, 2014

**The Rise and Fall of Accuracy-first Epistemology**

Gregory Wheeler (Ludwig Maximilian University of Munich)

4:10 pm, October 31, 2014

Faculty House, Columbia University

*Abstract.* Accuracy-first epistemology aims to supply non-pragmatic justifications for a variety of epistemic norms. The contemporary basis for accuracy-first epistemology is Jim Joyce’s program to reinterpret de Finetti’s scoring-rule arguments in terms of a “purely epistemic” notion of “gradational accuracy.” On Joyce’s account, scoring rules are taken to measure the accuracy of an agent’s belief state with respect to the true state of the world, where accuracy is conceived to be a pure epistemic good. Joyce’s non-pragmatic vindication of probabilism, then, is an argument to the effect that a measure of gradational accuracy satisfies conditions that are close enough to those necessary to run a de Finetti style coherence argument. A number of philosophers, including Hannes Leitgeb and Richard Pettigrew, have embraced Joyce’s program whole hog. Leitgeb and Pettigrew, for instance, have argued that Joyce’s program is too lax, and they have proposed conditions that narrow down the class of admissible gradational accuracy functions, while Pettigrew and his collaborators have sought to extend the list of epistemic norms receiving an accuracy-first treatment, a program that he calls Epistemic Decision Theory.

In this talk I report on joint work with Conor Mayo-Wilson that challenges the core doctrine of Epistemic Decision Theory, namely the proposal to supply a purely non-pragmatic justification for anything resembling the von Neumann–Morgenstern axioms for a numerical epistemic utility function. Indeed, we argue that none of the axioms necessary for Epistemic Decision Theory has a satisfactory non-pragmatic justification, and we point to reasons to suspect that not all of the axioms could be given one. Our argument, if sound, has consequences for recent discussions of “pragmatic encroachment”, too. For if pragmatic encroachment is a debate about whether there is a pragmatic component to the justification condition of knowledge, our arguments may be viewed as addressing the true belief condition of (fallibilist) accounts of knowledge.

# 2013 – 2014 Meetings

**Co-Chairs:**

Haim Gaifman (Columbia)

Rohit Parikh (CUNY)

**Rapporteur:**

Yang Liu (Columbia)

***

## May, 2014

**The Humean Thesis on Belief**

Hannes Leitgeb (Ludwig Maximilian University of Munich)

4:15 – 6:15 PM, May 2, 2014

716 Philosophy Hall, Columbia University

*Abstract.* I am going to make precise, and assess, the following thesis on (all-or-nothing) belief and degrees of belief: it is rational to believe a proposition just in case it is rational to have a stably high degree of belief in it. I will start with some historical remarks, which are going to motivate calling this postulate the “Humean thesis on belief”. Once the thesis has been formulated in formal terms, it is possible to derive conclusions from it. I will highlight three of its consequences in particular: doxastic logic; an instance of what is sometimes called the Lockean thesis on belief; and a simple qualitative decision theory.

## April, 2014

**Causal Decision Theory and intrapersonal Nash equilibria**

Arif Ahmed (University of Cambridge)

4:15 – 6:15 PM, April 4, 2014

716 Philosophy Hall, Columbia University

*Abstract.* Most philosophers today prefer ‘Causal Decision Theory’ to Bayesian or other non-Causal Decision Theories. What explains this is the fact that in certain Newcomb-like cases, only Causal theories recommend an option on which you would have done better, whatever the state of the world had been. But if so, there are cases of sequential choice in which the same difficulty arises for Causal Decision Theory. Worse: under further light assumptions the Causal Theory faces a money pump in these cases.

It may be illuminating to consider rational sequential choice as an intrapersonal game between one’s stages, and if time permits I will do this. In that light the difficulty for Causal Decision Theory appears to be that it allows, but its non-causal rivals do not allow, for Nash equilibria in such games that are Pareto inefficient.

## November, 2013

**Dynamic Logics of Evidence Based Beliefs**

Eric Pacuit (University of Maryland)

4:15 – 6:15 PM, Friday, November 1, 2013

Room 4419, CUNY GC

*Abstract.* The intuitive notion of evidence has both semantic and syntactic features. In this talk, I introduce and motivate an evidence logic for agents faced with possibly contradictory evidence from different sources. The logic is based on a neighborhood semantics, where a neighborhood N indicates that the agent has reason to believe that the true state of the world lies in N. Further notions of relative plausibility between worlds, and of beliefs based on that ordering, are then defined in terms of this evidence structure. The semantics invites a number of natural special cases, depending on how uniform we make the evidence sets and how coherent their total structure is. I will give an overview of the main axiomatizations for different classes of models and discuss logics that describe the dynamics of changing evidence, and the resulting language extensions. I will also discuss some intriguing connections with logics of belief revision.

## October, 2013

**Knowledge is Power, and so is Communication**

Rohit Parikh (CUNY)

2:00-4:00 PM, October 18, 2013

Room 4419, CUNY GC

*Abstract.* The BDI theory says that people’s actions are influenced by two factors: what they believe and what they want. Thus we can influence people’s actions by what we choose to tell them or by the knowledge that we withhold. Shakespeare’s Beatrice–Benedick case in Much Ado About Nothing is an old example. Currently we often use Kripke structures to represent knowledge (and belief). So we will address the following issues: a) How can we bring about a state of knowledge, represented by a Kripke structure, not only about facts but also about the knowledge of others, among a group of agents? b) What kind of theory of action under uncertainty can we use to predict how people will act under various states of knowledge? c) How can A say something credible to B when their interests (their payoff matrices) are in partial conflict? When can B trust A not to lie about this matter?

**The Value of Ignorance and Objective Probabilities**

Haim Gaifman (Columbia University)

2:00-4:00 PM, October 18, 2013

Room 4419, CUNY GC

*Abstract.* There are many cases in which knowledge has negative value and a rational agent may be willing to pay for not being informed. Such cases can be classified into those which are essentially of the single-agent kind and those where the negative value of information derives from social interactions, the existence of certain institutions, or legal considerations. In the single-agent case the standard examples involve situations in which knowing has in itself a value, besides its instrumental cognitive value for achieving goals. But in certain puzzling examples knowing is still a cognitive instrument and yet it seems to be an obstacle. Some of these cases touch on foundational issues concerning the meaning of objective probabilities. Ellsberg’s paradox involves an example of this kind. I shall focus on some of these problems in the later part of the talk.