The seminar is concerned with applying formal methods to fundamental issues, with an emphasis on probabilistic reasoning, decision theory, and games. In this context “logic” is broadly interpreted as covering applications that involve formal representations. The topics of interest have been researched across a broad spectrum of disciplines, including philosophy, statistics, economics, and computer science. The seminar is intended to bring together scholars from different fields of research so as to illuminate problems of common interest from different perspectives. Throughout each academic year, our monthly meetings feature presentations by members of the seminar and by distinguished guest speakers. In the spring of 2014, the seminar also became an integral part of the University Seminars at Columbia University.
Past speakers: Arif Ahmed, Jean Baccelli, Tim Button, Eleonora Cresto, Kenny Easwaran, Persi Diaconis, Juliet Floyd, Branden Fitelson, Haim Gaifman, Thomas Icard, Peter Koellner, Stephan Hartmann, Daniel Kahneman, Edi Karni, Hannes Leitgeb, Christian List, Bud Mishra, Michael Nielsen, Eric Pacuit, Rohit Parikh, Huw Price, Teddy Seidenfeld, Mark Schervish, Anubav Vasudevan, Gregory Wheeler.
Archive: 2017 – 2018 | 2016 – 2017 | 2015 – 2016 | 2014 – 2015 | 2013 – 2014
2018 – 2019 Meetings
Co-Chairs:
Haim Gaifman (Columbia)
Rohit Parikh (CUNY)
Yang Liu (Cambridge)
Rapporteur:
Michael Nielsen (Columbia)
***
April 2019
Rethinking Convergence to the Truth
Simon Huttegger (UC Irvine)
4:10 pm, Friday, April 26th, 2019
Faculty House, Columbia University
Abstract. Convergence to the truth is viewed with some ambivalence in philosophy of science. On the one hand, methods of inquiry that lead to the truth in the limit are prized as marks of scientific rationality. But an agent who, by using some method, expects to always converge to the truth seems to fail a minimum standard of epistemic modesty. This point was recently brought home by Gordon Belot in his critique of Bayesian epistemology. In this paper I will study convergence to the truth theorems within the framework of Edward Nelson’s radically elementary probability theory. This theory provides an enriched conceptual framework for investigating convergence and gives rise to an appropriately modest form of Bayesianism.
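For readers who want the classical result in view (my formulation, not the speaker’s; the talk recasts it in Nelson’s radically elementary framework), the Bayesian convergence-to-the-truth theorem at issue can be stated as:
\[ P\Big( \lim_{n \to \infty} P\big(H \mid E_1, \ldots, E_n\big) = \mathbf{1}_H \Big) = 1, \]
where H is a hypothesis, E_1, E_2, … is a sequence of evidence propositions that settles H in the limit, and \mathbf{1}_H is the indicator of H. The prior thus assigns probability 1 to its own eventual convergence to the truth value of H, which is the kind of self-assured immodesty that Belot’s critique targets.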
From Hölder to Hahn: Comparative, Classificatory, and Quantitative Conceptions in Probability and Decision
Arthur Paul Pedersen (LMU Munich)
4:10 pm, Friday, April 12th, 2019
Faculty House, Columbia University
Abstract. This talk is a contribution to measurement theory with a special focus on the foundations of probability and decision theory. Its primary goals are two-fold. One goal is to introduce an extension of a mathematical result itself generalizing measurement theory’s cornerstone, Hölder’s Theorem, which assures that any Archimedean totally ordered group is isomorphic to a subgroup of the additive group of real numbers. This extension, Hahn’s Embedding Theorem, drops the Archimedean requirement to obtain an isomorphism to a subgroup of a Hahn lexical vector space. The mathematical result I report in this talk establishes an isomorphism to an explicit, commensurate ordered field extension of the real numbers—numbers therein expressed in terms of formal power series in a single infinitesimal, with familiar additive and multiplicative operations equipped with a lexicographic order—measurement theory’s answer to admitting non-Archimedean systems without recourse to routine techniques from non-standard analysis.
The second goal of this talk is to explain what motivates all this math, turning to measurement theory’s paradigmatic applications to introduce this talk’s central result. Following a review of objectionable prohibitions and permissions attributed to standard probability and decision theories, such as conditions of dominance, countable additivity, continuity, and completeness, the second mathematical result I report establishes, as a corollary to the first, that reasonably regimented preferences are representable by subjective expected utility permitting non-Archimedean-valued probability and utility, as needed—and likewise for regimented comparative probability systems as well as for a faithful recasting of de Finetti’s coherent previsions. These corollaries, already free of typical structural requirements, extend to admit indeterminacy and incompleteness.
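As a reference point, here is one standard formulation (mine, not necessarily the speaker’s) of the cornerstone result mentioned above:
\[ \text{(H\"older)}\quad (G, +, \le) \ \text{an Archimedean totally ordered group} \;\Longrightarrow\; G \ \text{embeds, as an ordered group, into } (\mathbb{R}, +, \le). \]
Hahn’s Embedding Theorem drops the Archimedean hypothesis at the price of replacing \(\mathbb{R}\) by a lexicographically ordered vector space: every totally ordered abelian group embeds into a Hahn product of copies of \(\mathbb{R}\) indexed by its Archimedean classes. The result described above, as I read the abstract, sharpens the target of the embedding to an explicit ordered field of formal power series in a single infinitesimal.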
March 2019
The Epistemology of Nondeterminism
Adam Bjorndahl (Carnegie Mellon)
4:10 pm, Friday, March 29th, 2019
Faculty House, Columbia University
Abstract. Propositional dynamic logic (PDL) is a framework for reasoning about nondeterministic program executions (or, more generally, nondeterministic actions). In this setting, nondeterminism is taken as a primitive: a program is nondeterministic iff it has multiple possible outcomes. But what is the sense of “possibility” at play here? This talk explores an epistemic interpretation: working in an enriched logical setting, we represent nondeterminism as a relationship between a program and an agent deriving from the agent’s (in)ability to adequately measure the dynamics of the program execution. More precisely, using topology to capture the observational powers of an agent, we define the nondeterministic outcomes of a given program execution to be those outcomes that the agent is unable to rule out in advance. In this framework, determinism coincides exactly with continuity: that is, determinism is continuity in the observation topology. This allows us to embed PDL into (dynamic) topological (subset space) logic, laying the groundwork for a deeper investigation into the epistemology (and topology) of nondeterminism.
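For background (the standard relational PDL semantics, not the enriched topological setting of the talk; notation mine):
\[ M, w \models [\alpha]\varphi \iff \text{for all } v \text{ with } (w, v) \in R_\alpha, \; M, v \models \varphi, \]
where \(R_\alpha \subseteq W \times W\) is the transition relation of program \(\alpha\); on this picture, \(\alpha\) is nondeterministic at w precisely when w has more than one \(R_\alpha\)-successor. The talk replaces this primitive relational reading with a topological one, on which the possible outcomes at w are those the agent cannot rule out by observation.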
February 2019
Buddha versus Popper: Do we live in the present or do we plan for the future?
Rohit Parikh (CUNY)
4:10 pm, Friday, February 22, 2019
Faculty House, Columbia University
Abstract. There are two approaches to life. The first one, which we are identifying with Sir Karl Popper, is to think before we act and to let our hypotheses die in our stead when the overall outcome is likely to be negative. We act now for a better future, and we think now about which action will bring the best future. Both decision theory and backward induction are technical versions of this train of thought. The second approach, which we will identify with the Buddha, is to live in the present and not allow the future to pull us away from living in the ever-present Now. The Buddha’s approach is echoed in many others who came after him, such as Jelaluddin Rumi, Kahlil Gibran, and perhaps even Jesus, and it recurs in contemporary teachers like Eckhart Tolle and Thich Nhat Hanh. We may call Popper’s approach “futurism” and the Buddha’s approach “presentism.”
In this talk, we will discuss various aspects of the discourse on presentism and futurism. The purpose is to contrast one with the other. We will not attempt to side with one against the other, and instead leave it as a future project to find a prescriptive, action-guiding choice between the two. We merely conjecture that the optimal choice between these two positions may lie somewhere in between. (This is joint work with Jongjin Kim.)
December 2018
Actual Causality: A Survey
Joseph Halpern (Cornell University)
4:10 pm, Friday, December 7, 2018
Faculty House, Columbia University
Abstract. What does it mean that an event C “actually caused” event E? The problem of defining actual causation goes beyond mere philosophical speculation. For example, in many legal arguments, it is precisely what needs to be established in order to determine responsibility. (What exactly was the actual cause of the car accident or the medical problem?) The philosophy literature has been struggling with the problem of defining causality since the days of Hume, in the 1700s. Many of the definitions have been couched in terms of counterfactuals. (C is a cause of E if, had C not happened, then E would not have happened.) In 2001, Judea Pearl and I introduced a new definition of actual cause, using Pearl’s notion of structural equations to model counterfactuals. The definition has been revised twice since then, extended to deal with notions like “responsibility” and “blame”, and applied in databases and program verification. I survey the last 15 years of work here, including joint work with Judea Pearl, Hana Chockler, and Chris Hitchcock. The talk will be completely self-contained.
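As a rough reference point (my gloss of the naive counterfactual test in structural-equations notation, not the full Halpern–Pearl definition): C = c is a but-for cause of E in causal model M under context \(\vec{u}\) when
\[ (M, \vec{u}) \models (C = c) \wedge E \qquad \text{and} \qquad (M, \vec{u}) \models [C \leftarrow c']\, \neg E \ \text{ for some } c' \neq c, \]
i.e., the event C = c and the effect E both actually obtain, yet intervening to set C to some alternative value c′ would make E fail. The definitions surveyed in the talk refine this test, in particular by allowing a suitable set of background variables to be held fixed at their actual values so that preemption and overdetermination cases come out right.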
November 2018
Speed-optimal Induction and Dynamic Coherence
Michael Nielsen (Columbia University)
4:10 pm, Friday, November 16th, 2018
Faculty House, Columbia University
Abstract. A standard way to challenge convergence-based accounts of inductive success is to claim that they are too weak to constrain inductive inferences in the short run. We respond to such a challenge by answering some questions raised by Juhl (1994). When it comes to predicting limiting relative frequencies in the framework of Reichenbach, we show that speed-optimal convergence—a long-run success condition—induces dynamic coherence in the short run. This is joint work with Eric Wofsey.
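For context (my notation, not the authors’), the quantity to be predicted in Reichenbach’s framework is the limiting relative frequency of an outcome in a binary sequence,
\[ \rho \;=\; \lim_{n \to \infty} \frac{1}{n} \sum_{i=1}^{n} X_i, \qquad X_i \in \{0, 1\}, \]
when the limit exists; Reichenbach’s straight rule conjectures, after each sample of size n, that ρ equals the frequency observed so far. Speed-optimality, as the abstract indicates, is a long-run condition on how quickly a method’s conjectures approach ρ, and the result connects it to short-run (dynamic) coherence.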
September 2018
The Problem of State-Dependent Utility: A Reappraisal
Jean Baccelli (Munich Center for Mathematical Philosophy)
4:10 pm, Friday, September 28th, 2018
Faculty House, Columbia University
Abstract. State-dependent utility is a problem for decision theory under uncertainty. It questions the very possibility that beliefs be revealed by choice data. According to the current literature, all models of beliefs are equally exposed to the problem. Moreover, the problem is solvable only when the decision-maker can influence the resolution of uncertainty. This paper shows that these two views must be abandoned. The various models of beliefs are unequally exposed to the problem of state-dependent utility. The problem is solvable even when the decision-maker has no influence over the resolution of uncertainty. The implications of this reappraisal for a philosophical appreciation of the revealed preference methodology are discussed.
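To see why state-dependence threatens the identification of beliefs, consider the subjective expected utility form (a standard observation, in my notation, not the paper’s):
\[ V(f) \;=\; \sum_{s \in S} p(s)\, u\big(f(s), s\big), \]
where f is an act, p a subjective probability on the state space S, and u a utility that may vary with the state. For any positive weights \(\lambda(s)\), replacing \(p(s)\) by \(\lambda(s)p(s)/\sum_{s'}\lambda(s')p(s')\) and \(u(x, s)\) by \(u(x, s)\,\big(\sum_{s'}\lambda(s')p(s')\big)/\lambda(s)\) leaves every \(V(f)\) unchanged, so choice data alone cannot separate beliefs from state-dependent utilities. The reappraisal above concerns which models of belief are exposed to this problem and when it can be solved.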
2017 – 2018 Meetings
Co-Chairs:
Haim Gaifman (Columbia)
Rohit Parikh (CUNY)
Yang Liu (Cambridge)
Rapporteur:
Robby Finley (Columbia)
***
May 2018
Ungrounded Payoffs. A Tale of Unconditional Love and Unrepentant Hate
Eleonora Cresto (Instituto de Filosofía de la UBA, Universidad Torcuato Di Tella, UNTREF)
4:10 pm, Friday, May 4th, 2018
Faculty House, Columbia University
Abstract. I explore a game theoretic analysis of social interactions in which each agent’s well-being depends crucially on the well-being of another agent. As a result of this, payoffs are interdependent and cannot be fixed, and hence the overall assessment of particular courses of action becomes ungrounded. A paradigmatic example of this general phenomenon occurs when both players are ‘reflective altruists’, in a sense to be explained. I begin by making an analogy with semantic ungroundedness and semantic paradoxes, and then I show how to proceed in order to model such interactions successfully. I argue that we obtain a second order coordination game for subjective probabilities, in which agents try to settle on a single matrix. As we will see, the phenomenon highlights a number of interesting connections among the concepts of self-knowledge, common knowledge and common belief.
April 2018
On the Rational Role of Randomization
Thomas Icard (Stanford)
4:10 pm, Friday, April 13th, 2018
Faculty House, Columbia University
Abstract. Randomized acts play a marginal role in traditional Bayesian decision theory, essentially only that of tie-breaking. Meanwhile, rationales for randomized decisions have been offered in a number of areas, including game theory, experimental design, and machine learning. A common and plausible way of accommodating some (but not all) of these ideas from a Bayesian perspective is by appeal to a decision maker’s bounded computational resources. Making this suggestion both precise and compelling is surprisingly difficult. We propose a distinction between interesting and uninteresting cases where randomization can help a decision maker, with the eventual aim of achieving a unified story about the rational role of randomization. The interesting cases, we claim, all arise from constraints on memory.
February 2018
Finitely-Additive Decision Theory
Mark Schervish (Carnegie Mellon)
4:10 pm, Friday, February 16th, 2018
Faculty House, Columbia University
Abstract. We examine general decision problems with loss functions that are bounded below. We allow the loss function to assume the value ∞. No other assumptions are made about the action space, the types of data available, the types of non-randomized decision rules allowed, or the parameter space. By allowing prior distributions and the randomizations in randomized rules to be finitely-additive, we find very general complete class and minimax theorems. Specifically, under the sole assumption that the loss function is bounded below, every decision problem has a minimal complete class and all admissible rules are Bayes rules. Also, every decision problem has a minimax rule and a least-favorable distribution and every minimax rule is Bayes with respect to the least-favorable distribution. Some special care is required to deal properly with infinite-valued risk functions and integrals taking infinite values. This talk will focus on some examples and the major differences between finitely-additive and countably-additive decision theory. This is joint work with Teddy Seidenfeld, Jay Kadane, and Rafael Stern.
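For orientation, the objects in the complete class and minimax theorems above are the standard ones of statistical decision theory (my notation; the talk’s novelty lies in allowing finitely additive priors and randomizations):
\[ R(\theta, \delta) = \int L\big(\theta, \delta(x)\big)\, dP_\theta(x), \qquad r(\pi, \delta) = \int_{\Theta} R(\theta, \delta)\, d\pi(\theta), \]
where L is the loss, R the risk of rule δ at parameter θ, and r the Bayes risk under prior π. A rule is Bayes for π if it minimizes \(r(\pi, \cdot)\), minimax if it minimizes \(\sup_\theta R(\theta, \cdot)\), and a prior π is least favorable if it maximizes \(\inf_\delta r(\pi, \delta)\).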
December 2017
The Price of Broadminded Probabilities and the Limitation of Science
Haim Gaifman (Columbia University)
4:10 pm, Friday, December 8th, 2017
Faculty House, Columbia University
Abstract. A subjective probability function is broadminded to the extent that it assigns positive probabilities to conjectures that can possibly be true. Assigning such a conjecture the value 0 amounts to ruling out a priori any possibility of confirming the conjecture, to any extent, by the growing evidence. A positive value leaves open, in principle, the possibility of learning from the evidence. In general, broadmindedness is not an absolute notion but a graded one, and there is a price for it: the more broadminded the probability, the more complicated it is, because it has to assign non-zero values to more complicated conjectures. The framework suggested in the old Gaifman-Snir paper is suitable for phrasing this claim in a precise way and proving it. The technique by which the claim is established is to assume a definable probability function and to state, within the same language, a conjecture that can possibly be true whose probability is 0.
The complexity of the conjecture depends on the complexity of the probability, i.e., the complexity of the formulas that are used in defining it. In the Gaifman-Snir paper we used the arithmetical hierarchy as a measure of complexity. It is possible, however, to establish similar results with respect to more “down to earth” measures, defined in terms of the time it takes to calculate the probabilities to given precisions.
A claim of this form, for a rather simple setup, was first proven by Hilary Putnam in his paper “‘Degree of Confirmation’ and Inductive Logic”, which was published in the 1963 Schilpp volume dedicated to Carnap. The proof uses, in a probabilistic context, a diagonalization technique of the kind used in set theory and in computer science. In the talk I shall present Putnam’s argument and show how diagonalization can be applied in considerably richer setups.
The second part of the talk is rather speculative. I shall point out the possibility that there might be epistemic limitations to what human science can achieve, limitations imposed by certain pragmatic factors, such as the criterion of repeatable experiments. All of this would recommend a skeptical attitude.
Formalizing the Umwelt
Rohit Parikh (CUNY)
4:10 pm, Friday, December 1, 2017
Faculty House, Columbia University
Abstract. The umwelt is a notion invented by the Baltic-German biologist Jakob von Uexküll. It represents how a creature, an animal, a child or even an adult “sees” the world and is a precursor to the Wumpus world in contemporary AI literature. A fly is caught in a spider’s web because its vision is too coarse to see the fine threads of the web. Thus though the web is part of the world, it is not a part of the fly’s umwelt. Similarly a tick will suck not only on blood but also on any warm liquid covered by a membrane. In the tick’s umwelt, the blood and the warm liquid are “the same”. We represent an umwelt as a homomorphic image of the real world in which the creature, whatever it might be, has some perceptions, some powers, and some preferences (utilities for convenience). Thus we can calculate the average utility of an umwelt and also the utilities of two creatures combining their umwelts into a symbiosis. A creature may also have a “theory” which is a map from sets of atomic sentences to sets of atomic sentences. Atomic sentences which are observed may allow the creature to infer other atomic sentences not observed. This weak but useful notion of theory bypasses some of Davidson’s objections to animals having beliefs.
November 2017
Entropy and Insufficient Reason
Anubav Vasudevan (University of Chicago)
4:10 pm, Friday, November 10, 2017
Faculty House, Columbia University
Abstract. One well-known objection to the principle of maximum entropy is the so-called Judy Benjamin problem, first introduced by van Fraassen (1981). The problem turns on the apparently puzzling fact that, on the basis of information about an event’s conditional probability, the maximum entropy distribution will almost always assign to the conditioning event a probability strictly less than that assigned to it by the uniform distribution. In this paper, I present an analysis of the Judy Benjamin problem that can help to make sense of this seemingly odd feature of maximum entropy inference. My analysis is based on the claim that, in applying the principle of maximum entropy, Judy Benjamin is not acting out of a concern to maximize uncertainty in the face of new evidence, but is rather exercising a certain brand of epistemic charity towards her informant. This charity takes the form of an assumption on the part of Judy Benjamin that her informant’s evidential report leaves out no relevant information. I will explain how this single assumption suffices to rationalize Judy Benjamin’s behavior. I will then explain how such a re-conceptualization of the motives underlying Judy Benjamin’s appeal to the principle of maximum entropy can further our understanding of the relationship between this principle and the principle of insufficient reason. I will conclude with a discussion of the foundational significance for probability theory of ergodic theorems (e.g., de Finetti’s theorem) describing the asymptotic behavior of measure-preserving transformation groups. In particular, I will explain how these results, which serve as the basis of maximum entropy inference, can provide a unified conceptual framework in which to justify both a priori and a posteriori probabilistic reasoning.
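For reference (my formulation of the update rule at issue), maximum entropy updating on a reported conditional probability selects
\[ P^{*} \;=\; \operatorname*{arg\,max}_{Q :\, Q(A \mid B) = q} \Big( -\sum_{\omega} Q(\omega) \log Q(\omega) \Big), \]
where B is the conditioning event and q the reported value of \(Q(A \mid B)\); starting from a uniform prior, this is equivalent to minimizing the Kullback-Leibler divergence from that prior subject to the same constraint. The puzzling feature noted above is that, except in degenerate cases, \(P^{*}(B)\) is strictly less than the uniform prior’s probability of B.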
2016 – 2017 Meetings
Co-Chairs:
Haim Gaifman (Columbia)
Rohit Parikh (CUNY)
Yang Liu (Cambridge)
Rapporteur:
Robby Finley (Columbia)
***
April 2017
Internal categoricity and internal realism in the philosophy of mathematics
Tim Button (University of Cambridge)
4:10 pm, Wednesday, April 19, 2017
Faculty House, Columbia University
Abstract. Many philosophers think that mathematics is about ‘structure’. Many philosophers would also explicate this notion of ‘structure’ via model theory. But the Compactness and Löwenheim–Skolem theorems lead to some famously hard questions for this view. They threaten to leave us unable to talk about any particular ‘structure’.
In this talk, I outline how we might explicate ‘structure’ without appealing to model theory, and indeed without invoking any kind of semantic ascent. The approach involves making use of internal categoricity. I will outline the idea of internal categoricity, state some results, and use these results to make sense of Putnam’s beautiful but cryptic claim: “Models are not lost noumenal waifs looking for someone to name them; they are constructions within our theory itself, and they have names from birth.”
Gödel’s Disjunction
Peter Koellner (Harvard)
5 pm, Friday, April 7, 2017
Philosophy 716, Columbia University
Abstract. Gödel’s disjunction asserts that either “the mind cannot be mechanized” or “there are absolutely undecidable statements.” Arguments are examined for and against each disjunct in the context of precise frameworks governing the notions of absolute provability and truth. The focus is on Penrose’s new argument, which interestingly involves type-free truth. In order to reconstruct Penrose’s argument, a system, DKT, is devised for absolute provability and type-free truth. It turns out that in this setting there are actually two versions of the disjunction and its disjuncts. The first, fully general versions end up being (provably) indeterminate. The second, restricted versions end up being (provably) determinate, and so, in this case there is at least an initial prospect of success. However, in this case it will be seen that although the disjunction itself is provable, neither disjunct is provable nor refutable in the framework.
March 2017
An Epistemic Generalization of Rationalizability
Rohit Parikh (CUNY)
4:10 pm, Friday, March 24, 2017
Faculty House, Columbia University
Abstract. Rationalizability, originally proposed by Bernheim and Pearce, generalizes the notion of Nash equilibrium. Nash equilibrium requires common knowledge of strategies. Rationalizability only requires common knowledge of rationality. However, their original notion assumes that the payoffs are common knowledge; that is, agents know what world they are in, but may be ignorant of what other agents are playing.
We generalize the original notion of rationalizability to consider situations where agents do not know what world they are in, or where some know but others do not know. Agents who know something about the world can take advantage of their superior knowledge. It may also happen that both Ann and Bob know about the world but Ann does not know that Bob knows. How might they act?
We will show how a notion of rationalizability in the context of partial knowledge, represented by a Kripke structure, can be developed.
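For background (the standard textbook definition, not the generalization developed in the talk; notation mine), rationalizability is characterized by iterated elimination of never-best responses:
\[ R_i^{0} = S_i, \qquad R_i^{k+1} = \big\{ s_i \in R_i^{k} \;:\; s_i \text{ is a best response to some belief } \mu_i \text{ over } R_{-i}^{k} \big\}, \qquad R_i^{\infty} = \bigcap_{k \ge 0} R_i^{k}, \]
where \(S_i\) is player i’s strategy set and \(R_i^{\infty}\) the set of i’s rationalizable strategies. The generalization sketched above relaxes the assumption that payoffs are common knowledge, with the players’ partial knowledge of the world represented by a Kripke structure.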
November 2016
Essential Simplifications of Savage’s Subjective Probabilities System
Haim Gaifman (Columbia University) and
Yang Liu (University of Cambridge)
4:10 pm, Friday, November 18, 2016
Faculty House, Columbia University
Abstract. I shall try to cover: (I) a short outline of Savage’s system; (II) a new mathematical technique for handling “partitions with errors”, which leads to a simplification that Savage tried but did not succeed in getting; and (III) some philosophical analysis of an idealized rational agent, which is commonly used as a guideline for subjective probabilities.
Some acquaintance with Savage’s system is helpful, but I added (I) in order to make for a self-contained presentation.
The talk is based on joint work with Yang Liu. Please email Robby for an introductory section of the present draft of our paper.
October 2016
Heart of DARCness
Huw Price (University of Cambridge)
4:10 pm, Thursday, October 13, 2016
Faculty House, Columbia University
Abstract. Alan Hajek has recently criticised the thesis that Deliberation Crowds Out Prediction (renaming it the DARC thesis, for ‘Deliberation Annihilates Reflective Credence’). Hajek’s paper has reinforced my sense that proponents and opponents of this thesis often talk past one another. To avoid confusions of this kind we need to dissect our way to the heart of DARCness, and to distinguish it from various claims for which it is liable to be mistaken. In this talk, based on joint work with Yang Liu, I do some of this anatomical work. Properly understood, I argue, the heart is in good shape, and untouched by Hajek’s jabs at surrounding tissue. Moreover, a feature that Hajek takes to be a problem for the DARC thesis – that it commits us to widespread ‘credal gaps’ – turns out to be a common and benign feature of a broad class of cases, of which deliberation is easily seen to be one.
September 2016
The Problem of Thinking Too Much
Persi Diaconis (Stanford University)
4:10 pm, Friday, September 16, 2016
Faculty House, Columbia University
Abstract. We all know the problem: you sit there, turning things over, and nothing gets done. Indeed, there are examples where “quick and dirty” methods that throw away information dominate. My examples will be from Bayesian statistics and the mathematics of coin tossing, but I will try to survey some of the work in psychology, philosophy, and economics.
2015 – 2016 Meetings
Co-Chairs:
Haim Gaifman (Columbia)
Rohit Parikh (CUNY)
Yang Liu (Cambridge)
Rapporteur:
Robby Finley (Columbia)
***
May 2016
Reason-based choice and context-dependence: an explanatory framework
Christian List (London School of Economics)
4:10 pm, Friday, May 6, 2016
Faculty House, Columbia University
Abstract. We introduce a “reason-based” framework for explaining and predicting individual choices. The key idea is that a decision-maker focuses on some but not all properties of the options and chooses an option whose “motivationally salient” properties he/she most prefers. Reason-based explanations can capture two kinds of context-dependent choice: (i) the motivationally salient properties may vary across choice contexts, and (ii) they may include “context-related” properties, not just “intrinsic” properties of the options. Our framework allows us to explain boundedly rational and sophisticated choice behaviour. Since properties can be recombined in new ways, it also offers resources for predicting choices in unobserved contexts.
March 2016
A New Framework for Aggregating Utility
Kenny Easwaran (Texas A&M University)
4:10 pm, Friday, March 11, 2016
Faculty House, Columbia University
Abstract. It is often assumed that a natural way to aggregate utility over multiple agents is by addition. When there are infinitely many agents, this leads to various problems. Vallentyne and Kagan approach this problem by providing a partial ordering over outcomes, rather than a numerical aggregate value. Bostrom and Arntzenius both argue that without a numerical value, it is difficult to integrate this aggregation into our best method for considering acts with risky outcomes: expected value.
My 2014 paper, “Decision Theory without Representation Theorems”, describes a project for evaluating risky acts that extends expected value to cases where it is infinite or undefined. The project of this paper is to extend this methodology in a way that deals with risk and aggregation across agents simultaneously, instead of giving priority to one or the other as Bostrom and Arntzenius require. The result is still merely a partial ordering, but since it already includes all considerations of risk and aggregation, there is no further need for particular numerical representations.
December 2015
Two Approaches to Belief Revision
Branden Fitelson (Rutgers University)
4:10 pm, Friday, December 18, 2015
Faculty House, Columbia University
Abstract. In this paper, we compare and contrast two methods for revising qualitative (viz., “full”) beliefs. The first method is a (broadly) Bayesian one, which operates (in its most naive form) via conditionalization and the minimization of expected inaccuracy. The second method is the AGM approach to belief revision. Our aim here is to provide the most straightforward explanation of the ways in which these two methods agree and disagree with each other. Ultimately, we conclude that AGM may be seen as more epistemically risk-seeking (in a sense to be made precise in the talk) than EUT (from the Bayesian perspective).
This talk is based on joint work with Ted Shear.
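For readers who want the Bayesian ingredients of the comparison spelled out, here is a schematic of the “naive form” mentioned in the abstract (my notation; the talk’s precise setup may differ):
\[ P_E(\cdot) = P(\cdot \mid E) = \frac{P(\cdot \cap E)}{P(E)}, \qquad B^{*} \in \operatorname*{arg\,min}_{B} \; \sum_{w} P_E(w)\, \mathcal{I}(B, w), \]
that is, credences are updated by conditionalization on the evidence E, and the revised full-belief set \(B^{*}\) is chosen to minimize expected inaccuracy, where \(\mathcal{I}(B, w)\) measures how inaccurate belief set B is at world w. AGM revision, by contrast, operates directly on the belief set via its own postulates.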
November 2015
Creolizing the Web
Bud Mishra (Courant Institute, NYU)
4:10 pm, Friday, November 20, 2015
Faculty House, Columbia University
Abstract. This talk will focus on a set of game theoretic ideas with applications to Computer, Biological and Social Sciences. We will primarily rely on a realistic formulation of classical information-asymmetric signaling games, in a repeated form, while allowing the agents to dynamically vary their utility functions. We will also explore the design and creolization of a new natural language system (“InTuit”) specifically designed for the web.
The talk will build on our earlier experience in the areas of systems biology (evolutionary models), game theory, data science, model checking, causality analysis, cyber security, insider threat, virtualization and data markets.
September 2015
Awareness of Unawareness: A Theory of Decision Making in the Face of Ignorance
Edi Karni (Johns Hopkins University)
4:10 pm, Friday, September 25, 2015
Faculty House, Columbia University
Abstract. In the wake of growing awareness, decision makers anticipate that they might acquire knowledge that, in their current state of ignorance, is unimaginable. Supposedly, this anticipation manifests itself in the decision makers’ choice behavior. In this paper we model the anticipation of growing awareness, lay choice-based axiomatic foundations for a subjective expected utility representation of beliefs about the likelihood of discovering unknown consequences, and assign utility to consequences that are not only unimaginable but may also be nonexistent. In so doing, we maintain the flavor of reverse Bayesianism of Karni and Vierø (2013, 2015).
2014 – 2015 Meetings
Co-Chairs:
Haim Gaifman (Columbia)
Rohit Parikh (CUNY)
Rapporteur:
Yang Liu (Columbia)
***
May 2015
Gödel on Russell: Truth, Perception, and an Infinitary Version of the Multiple Relation Theory of Judgment
Juliet Floyd (Boston University)
4:10 pm, May 8, 2015
Faculty House, Columbia University
February 2015
Learning Conditionals and the Problem of Old Evidence
Stephan Hartmann (Ludwig Maximilian University of Munich)
4:10 pm, February 13, 2015
Faculty House, Columbia University
Abstract. The following are abstracts of two papers on which this talk is based.
The Problem of Old Evidence has troubled Bayesians ever since Clark Glymour first presented it in 1980. Several solutions have been proposed, but all of them have drawbacks and none of them is considered to be the definitive solution. In this article, I propose a new solution which combines several old ideas with a new one. It circumvents the crucial omniscience problem in an elegant way and leads to a considerable confirmation of the hypothesis in question.
Modeling how to learn an indicative conditional has been a major challenge for formal epistemologists. One proposal to meet this challenge is to construct the posterior probability distribution by minimizing the Kullback-Leibler divergence between the posterior and the prior, taking the learned information (expressed as a conditional probability statement) into account as a constraint. This proposal has been criticized in the literature on the basis of several clever examples. In this article, we revisit four of these examples and show that one obtains intuitively correct results for the posterior probability distribution if the underlying probabilistic models reflect the causal structure of the scenarios in question.
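For concreteness (my notation), the update rule under discussion in the second abstract is
\[ P^{*} \;=\; \operatorname*{arg\,min}_{Q :\, Q(B \mid A) = q} \; D_{\mathrm{KL}}(Q \,\|\, P), \qquad D_{\mathrm{KL}}(Q \,\|\, P) = \sum_{\omega} Q(\omega)\, \log \frac{Q(\omega)}{P(\omega)}, \]
i.e., upon learning the indicative conditional “if A then B”, read as the constraint \(Q(B \mid A) = q\), the posterior \(P^{*}\) is the distribution satisfying the constraint that is closest to the prior P in Kullback-Leibler divergence. The examples revisited in the paper probe when this rule delivers intuitively correct posteriors.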
December 2014
Two lessons to remember from the Sleeping Beauty problem
Teddy Seidenfeld (Carnegie Mellon University)
4:10 pm, December 5, 2014
Faculty House, Columbia University
November 2014
Conversation about Human Judgment and Decision-making
Daniel Kahneman (Princeton University)
4:10 pm, November 7, 2014
Faculty House, Columbia University
October 2014
The Rise and Fall of Accuracy-first Epistemology
Gregory Wheeler (Ludwig Maximilian University of Munich)
4:10 pm, October 31, 2014
Faculty House, Columbia University
Abstract. Accuracy-first epistemology aims to supply non-pragmatic justifications for a variety of epistemic norms. The contemporary basis for accuracy-first epistemology is Jim Joyce’s program to reinterpret de Finetti’s scoring-rule arguments in terms of a “purely epistemic” notion of “gradational accuracy.” On Joyce’s account, scoring rules are taken to measure the accuracy of an agent’s belief state with respect to the true state of the world, where accuracy is conceived to be a pure epistemic good. Joyce’s non-pragmatic vindication of probabilism, then, is an argument to the effect that a measure of gradational accuracy satisfies conditions that are close enough to those necessary to run a de Finetti style coherence argument. A number of philosophers, including Hannes Leitgeb and Richard Pettigrew, have embraced Joyce’s program whole hog. Leitgeb and Pettigrew, for instance, have argued that Joyce’s program is too lax, and they have proposed conditions that narrow down the class of admissible gradational accuracy functions, while Pettigrew and his collaborators have sought to extend the list of epistemic norms receiving an accuracy-first treatment, a program that he calls Epistemic Decision Theory.
In this talk I report on joint work with Conor Mayo-Wilson that challenges the core doctrine of Epistemic Decision Theory, namely the proposal to supply a purely non-pragmatic justification for anything resembling the Von Neumann and Morgenstern axioms for a numerical epistemic utility function. Indeed, we argue that none of the axioms necessary for Epistemic Decision Theory has a satisfactory non-pragmatic justification, and we point to reasons to suspect that not all of the axioms could be given one. Our argument, if sound, also has consequences for recent discussions of “pragmatic encroachment”. For if pragmatic encroachment is a debate about whether there is a pragmatic component to the justification condition of knowledge, our arguments may be viewed as addressing the true belief condition of (fallibilist) accounts of knowledge.
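For orientation (a standard example from the accuracy-first literature, in my notation, not part of the talk), the Brier score is the usual instance of the gradational accuracy measures at issue:
\[ \mathcal{I}(b, w) \;=\; \sum_{i=1}^{n} \big( b(A_i) - \mathbf{1}_{A_i}(w) \big)^{2}, \]
where b is a credence function over propositions \(A_1, \ldots, A_n\) and \(\mathbf{1}_{A_i}(w)\) is 1 if \(A_i\) is true at world w and 0 otherwise. Joyce-style accuracy arguments show that any credence function violating the probability axioms is accuracy-dominated under such measures: some probabilistically coherent credence function is strictly less inaccurate at every world.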
2013 – 2014 Meetings
Co-Chairs:
Haim Gaifman (Columbia)
Rohit Parikh (CUNY)
Rapporteur:
Yang Liu (Columbia)
***
May 2014
The Humean Thesis on Belief
Hannes Leitgeb (Ludwig Maximilian University of Munich)
4:15 – 6:15 PM, May 2, 2014
716 Philosophy Hall, Columbia University
Abstract. I am going to make precise, and assess, the following thesis on (all-or-nothing) belief and degrees of belief: It is rational to believe a proposition just in case it is rational to have a stably high degree of belief in it. I will start with some historical remarks, which are going to motivate calling this postulate the “Humean thesis on belief”. Once the thesis has been formulated in formal terms, it is possible to derive conclusions from it. Three of its consequences I will highlight in particular: doxastic logic; an instance of what is sometimes called the Lockean thesis on belief; and a simple qualitative decision theory.
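For reference (standard formulations in my notation, not necessarily the speaker’s), the Lockean thesis mentioned above says
\[ \mathrm{Bel}(p) \iff P(p) \ge t \quad \text{for some fixed threshold } t \in (\tfrac{1}{2}, 1], \]
while the stability at the heart of the Humean thesis is, roughly, that \(P(p \mid q)\) stays above the threshold for every proposition q that is consistent with p and has positive probability (Leitgeb’s notion of P-stability). The talk derives an instance of the Lockean thesis, a doxastic logic, and a qualitative decision theory from the Humean thesis so understood.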
April 2014
Causal Decision Theory and intrapersonal Nash equilibria
Arif Ahmed (University of Cambridge)
4:15 – 6:15 PM, April 4, 2014
716 Philosophy Hall, Columbia University
Abstract. Most philosophers today prefer ‘Causal Decision Theory’ to Bayesian or other non-Causal Decision Theories. What explains this is the fact that in certain Newcomb-like cases, only Causal theories recommend an option on which you would have done better, whatever the state of the world had been. But if so, there are cases of sequential choice in which the same difficulty arises for Causal Decision Theory. Worse: under further light assumptions the Causal Theory faces a money pump in these cases.
It may be illuminating to consider rational sequential choice as an intrapersonal game between one’s stages, and if time permits I will do this. In that light the difficulty for Causal Decision Theory appears to be that it allows, but its non-causal rivals do not allow, for Nash equilibria in such games that are Pareto inefficient.
November 2013
Dynamic Logics of Evidence Based Beliefs
Eric Pacuit (University of Maryland)
4:15 – 6:15 PM, Friday, November 1, 2013
Room 4419, CUNY GC
Abstract. The intuitive notion of evidence has both semantic and syntactic features. In this talk, I introduce and motivate an evidence logic for agents faced with possibly contradictory evidence from different sources. The logic is based on a neighborhood semantics, where a neighborhood N indicates that the agent has reason to believe that the true state of the world lies in N. Further notions of relative plausibility between worlds, and of beliefs based on that ordering, are then defined in terms of this evidence structure. The semantics invites a number of natural special cases, depending on how uniform we make the evidence sets, and how coherent their total structure is. I will give an overview of the main axiomatizations for different classes of models and discuss logics that describe the dynamics of changing evidence, and the resulting language extensions. I will also discuss some intriguing connections with logics of belief revision.
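For background (the evidence-model semantics in the style of van Benthem and Pacuit; notation mine), the basic “having evidence for” modality is interpreted as
\[ M, w \models \Box \varphi \iff \exists X \in E(w) \ \text{such that} \ X \subseteq \llbracket \varphi \rrbracket, \]
where \(E(w)\) is the family of evidence sets (neighborhoods) available to the agent at world w and \(\llbracket \varphi \rrbracket\) is the set of worlds at which φ holds. Plausibility and belief are then derived from how these possibly conflicting evidence sets can be consistently combined.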
October 2013
Knowledge is Power, and so is Communication
Rohit Parikh (CUNY)
2:00-4:00 PM, October 18, 2013
Room 4419, CUNY GC
Abstract. The BDI theory says that people’s actions are influenced by two factors: what they believe and what they want. Thus we can influence people’s actions by what we choose to tell them or by the knowledge that we withhold. Shakespeare’s Beatrice-Benedick case in Much Ado about Nothing is an old example. Currently we often use Kripke structures to represent knowledge (and belief). So we will address the following issues: a) How can we bring about a state of knowledge, represented by a Kripke structure, not only about facts, but also about the knowledge of others, among a group of agents? b) What kind of a theory of action under uncertainty can we use to predict how people will act under various states of knowledge? c) How can A say something credible to B when their interests (their payoff matrices) are in partial conflict? When can B trust A not to lie about this matter?
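For background (the standard Kripke semantics for knowledge; notation mine), agent i’s knowledge is interpreted as
\[ M, w \models K_i \varphi \iff \text{for all } v \ \text{with} \ w R_i v, \; M, v \models \varphi, \]
where \(R_i\) is agent i’s accessibility (indistinguishability) relation. The first question above asks how communication can move a group from one such structure to another, including higher-order knowledge about what others know.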
The Value of Ignorance and Objective Probabilities
Haim Gaifman (Columbia University)
2:00-4:00 PM, October 18, 2013
Room 4419, CUNY GC
Abstract. There are many cases in which knowledge has negative value and a rational agent may be willing to pay for not being informed. Such cases can be classified into those which are essentially of the single-agent kind and those where the negative value of information derives from social interactions, the existence of certain institutions, or legal considerations. In the single-agent case the standard examples involve situations in which knowing has a value in itself, besides its instrumental cognitive value for achieving goals. But in certain puzzling examples knowing is still a cognitive instrument and yet it seems to be an obstacle. Some of these cases touch on foundational issues concerning the meaning of objective probabilities. Ellsberg’s paradox involves an example of this kind. I shall focus on some of these problems in the later part of the talk.