Co-Chairs:
Haim Gaifman (Columbia)
Rohit Parikh (CUNY)
Rapporteur:
Yang Liu (Columbia)
October, 2013
The Value of Ignorance and Objective Probabilities
Haim Gaifman (Columbia University)
2:00-4:00 PM, October 18th, 2013
Room 4419, CUNY GC
Abstract. There are many cases in which knowledge has negative value and a rational agent may be willing to pay for not being informed. Such cases can be classified into those that are essentially of the single-agent kind and those where the negative value of information derives from social interactions, the existence of certain institutions, or legal considerations. In the single-agent case the standard examples involve situations in which knowing has a value in itself, besides its instrumental cognitive value for achieving goals. But in certain puzzling examples knowing is still a cognitive instrument and yet it seems to be an obstacle. Some of these cases touch on foundational issues concerning the meaning of objective probabilities. Ellsberg’s paradox involves an example of this kind. I shall focus on some of these problems in the later part of the talk.
Knowledge is Power, and so is Communication
Rohit Parikh (CUNY)
2:00-4:00 PM, October 18th, 2013
Room 4419, CUNY GC
Abstract. The BDI theory says that people’s actions are influenced by two factors: what they believe and what they want. Thus we can influence people’s actions by what we choose to tell them or by the knowledge that we withhold. Shakespeare’s Beatrice-Benedick case in Much Ado About Nothing is an old example. Currently we often use Kripke structures to represent knowledge (and belief). So we will address the following issues: a) How can we bring about a state of knowledge, represented by a Kripke structure, not only about facts, but also about the knowledge of others, among a group of agents? b) What kind of a theory of action under uncertainty can we use to predict how people will act under various states of knowledge? c) How can A say something credible to B when their interests (their payoff matrices) are in partial conflict? When can B trust A not to lie about this matter?
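As a minimal illustration (not from the talk itself) of how a Kripke structure represents knowledge about facts and about the knowledge of others, the following sketch evaluates a knowledge operator over a two-world, two-agent model. The worlds, agents, and accessibility relations are invented for the example.

```python
# Worlds and the atomic facts true at each world.
valuation = {
    "w1": {"p"},   # p holds at w1
    "w2": set(),   # p fails at w2
}

# Accessibility relations: for each agent, the worlds she considers
# possible from each world (equivalence classes, as in an S5 model).
access = {
    "ann": {"w1": {"w1"}, "w2": {"w2"}},              # Ann can tell w1 from w2
    "bob": {"w1": {"w1", "w2"}, "w2": {"w1", "w2"}},  # Bob cannot
}

def holds(formula, world):
    """Evaluate a formula at a world. Formulas are atoms (strings)
    or tuples ("K", agent, subformula) for 'agent knows subformula'."""
    if isinstance(formula, str):
        return formula in valuation[world]
    op, agent, sub = formula
    assert op == "K"
    # K_agent sub: sub holds in every world the agent considers possible.
    return all(holds(sub, v) for v in access[agent][world])

print(holds("p", "w1"))                               # True: p is a fact at w1
print(holds(("K", "ann", "p"), "w1"))                 # True: Ann knows p
print(holds(("K", "bob", "p"), "w1"))                 # False: Bob cannot rule out w2
print(holds(("K", "ann", ("K", "bob", "p")), "w1"))   # False: nested knowledge fails
```

Announcing p publicly would, in this picture, shrink Bob’s accessibility set to {w1} and thereby bring about common knowledge of p.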
November, 2013
Dynamic Logics of Evidence-Based Beliefs
Eric Pacuit (University of Maryland)
4:15 – 6:15 PM, Friday, November 1, 2013
Room 4419, CUNY GC
Abstract. The intuitive notion of evidence has both semantic and syntactic features. In this talk, I introduce and motivate an evidence logic for agents faced with possibly contradictory evidence from different sources. The logic is based on a neighborhood semantics, where a neighborhood N indicates that the agent has reason to believe that the true state of the world lies in N. Further notions of relative plausibility between worlds, and of beliefs based on that ordering, are then defined in terms of this evidence structure. The semantics invites a number of natural special cases, depending on how uniform we make the evidence sets and how coherent their total structure is. I will give an overview of the main axiomatizations for different classes of models and discuss logics that describe the dynamics of changing evidence, together with the resulting language extensions. I will also discuss some intriguing connections with logics of belief revision.
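A toy sketch of the setup (my illustration, with invented worlds and evidence sets, not the talk’s own formalism): each neighborhood is a set of worlds some source says contains the true state, and one natural derived plausibility order takes w to be at least as plausible as v when every piece of evidence covering v also covers w. Beliefs can then track the maximally plausible worlds.

```python
worlds = {1, 2, 3, 4}
evidence = [{1, 2}, {2, 3}, {2, 4}]   # possibly conflicting sources

def at_least_as_plausible(w, v):
    # w is at least as plausible as v: every evidence set
    # containing v also contains w.
    return all(w in N for N in evidence if v in N)

def strictly_more_plausible(w, v):
    return at_least_as_plausible(w, v) and not at_least_as_plausible(v, w)

# Maximally plausible worlds: not strictly dominated by any other world.
maximal = {w for w in worlds
           if not any(strictly_more_plausible(v, w) for v in worlds)}

print(maximal)  # world 2 lies in every evidence set, so it dominates

# One route to belief: believe P when P holds at all maximal worlds.
P = {2, 3}
print(maximal <= P)  # True: the agent believes P
```

Note that the three evidence sets are jointly inconsistent as a whole (their intersection with {1}, {3}, {4} conflicts), yet the derived order still singles out world 2; this is the sense in which the semantics handles contradictory evidence gracefully.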
April, 2014
Causal Decision Theory and Intrapersonal Nash Equilibria
Arif Ahmed (University of Cambridge)
4:15 – 6:15 PM, April 4th, 2014
715 Philosophy Hall, Columbia University
Abstract. Most philosophers today prefer ‘Causal Decision Theory’ to Bayesian or other non-Causal Decision Theories. What explains this preference is the fact that in certain Newcomb-like cases, only Causal theories recommend an option on which you would have done better, whatever the state of the world had been. But if so, there are cases of sequential choice in which the same difficulty arises for Causal Decision Theory. Worse, under further mild assumptions the Causal Theory faces a money pump in these cases.
It may be illuminating to consider rational sequential choice as an intrapersonal game between one’s stages, and if time permits I will do this. In that light the difficulty for Causal Decision Theory appears to be that it, unlike its non-causal rivals, allows for Nash equilibria in such games that are Pareto-inefficient.
May, 2014
The Humean Thesis on Belief
Hannes Leitgeb (Ludwig Maximilian University of Munich)
4:15 – 6:15 PM, May 2nd, 2014
716 Philosophy Hall, Columbia University
Abstract. I am going to make precise, and assess, the following thesis on (all-or-nothing) belief and degrees of belief: it is rational to believe a proposition just in case it is rational to have a stably high degree of belief in it. I will start with some historical remarks, which are going to motivate calling this postulate the “Humean thesis on belief”. Once the thesis has been formulated in formal terms, it is possible to derive conclusions from it. I will highlight three of its consequences in particular: doxastic logic; an instance of what is sometimes called the Lockean thesis on belief; and a simple qualitative decision theory.
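One standard way to make “stably high degree of belief” precise (a sketch of the stability idea, here in its threshold-1/2 form, not necessarily the exact formulation of the talk) is: a proposition is stably probable when it remains probable under conditionalization on any evidence consistent with it.

```latex
% Sketch: rational belief as P-stability (threshold r = 1/2 case).
% Bel(X) is rational iff X stays probable under conditionalization
% on any proposition Y consistent with X.
\[
  \mathrm{Bel}(X) \iff
  P(X \mid Y) > \tfrac{1}{2}
  \quad \text{for all } Y \text{ with } Y \cap X \neq \emptyset
  \text{ and } P(Y) > 0.
\]
```

Taking Y to be the whole space immediately yields P(X) > 1/2, so an instance of the Lockean thesis (belief implies probability above a threshold) falls out as a consequence.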