Formal Philosophy

Logic at Columbia University

Workshop on Group Agency and Social Epistemology

by Robby


Date: April 2nd, 10:00.  Location: Phil 716.

Philip Kitcher
Social Agnotology

Agnotology stands to ignorance as epistemology stands to knowledge. This talk focuses on social mechanisms that sustain ignorance. Some forms of ignorance are fruitful; most are not. I begin with some distinctions. Those distinctions are then used to consider particularly noxious (and live) forms of contemporary ignorance: specifically, kinds of ignorance that persist despite the presence of knowledgeable people within the community, and that endure because those who remain ignorant cannot tell who is expert with respect to an important debated issue (example: confusions about anthropogenic climate change). I argue that the intellectual credit economy, useful in sustaining valuable diversity within the scientific community, interacts with other valuable social conditions (wide recognition that scientists should inform the general public) to erode the markers of epistemic authority on which all of us depend.

Cailin O’Connor
Power, Bargaining, and Evolution

Nash famously showed that power differences across players in game theoretic models can translate into advantages in bargaining scenarios. In this talk, I discuss joint work with Justin Bruner where we explore how power, cashed out in several different ways, can lead to bargaining advantage in evolving social populations. I show how these models can inform the emergence of norms of collaboration across hierarchies in academia, and other bargaining norms in populations with more and less powerful groups.
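
The flavor of Nash's observation can be seen in a toy calculation (the numbers and the choice of disagreement points as the measure of "power" are my own illustration, not taken from the talk). In the Nash bargaining solution, a player with a better fallback position captures a larger share of the resource:

```python
from fractions import Fraction

def nash_split(total, d1, d2):
    """Nash bargaining solution for splitting `total` between two players
    with disagreement payoffs d1 and d2: maximize (x1 - d1) * (x2 - d2)
    subject to x1 + x2 = total.  The maximizer splits the surplus equally,
    so each player keeps her fallback plus half of what is left over."""
    surplus = total - d1 - d2
    return d1 + Fraction(surplus, 2), d2 + Fraction(surplus, 2)

# Symmetric players split the resource evenly...
assert nash_split(10, 0, 0) == (5, 5)
# ...but a better outside option (one way of cashing out power)
# translates directly into a larger share of the bargain.
assert nash_split(10, 2, 0) == (6, 4)
```

O'Connor and Bruner's models embed bargaining like this in evolving populations, where the advantage emerges dynamically rather than by fiat; the static solution above only shows why asymmetric fallbacks matter at all.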

Gregory Wheeler
Mispriced gambles: What peers learn when they disagree

The Preservation of Irrelevant Evidence (PIE) Principle maintains that a resolution strategy for peer disagreements should preserve unanimous judgments of evidential irrelevance among the peers. It is well known that no standard Bayesian resolution strategy satisfies the PIE Principle, and some, such as Carl Wagner, have argued that this is so much the worse for PIE. In this paper we respond by giving a loss aversion argument in support of PIE and against Bayes. Another response is for each peer to dig in her heels and remain ‘steadfast’, or to cave in to a single dictator. Thomas Kelly advocates the former approach and nobody, so far as I know, the latter. In any case, we respond by arguing that a disagreement introduces to the peers the serious possibility that they may have mispriced the random quantities in dispute. The theory of imprecise probability offers tools to clear up these matters, but it also uncovers new issues that are unfamiliar to standard probabilistic models for a single agent. Thus, we introduce the notion of a set-based credal judgment to frame and address some of the subtleties that arise in peer disagreements.
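
The kind of PIE failure at issue can be reproduced in a small example (the numbers are mine, not from the paper): two peers each judge propositions A and B independent, yet straight averaging of their credences makes A and B correlated, while keeping both credence functions as a set, in the spirit of a set-based credal judgment, leaves every member respecting the unanimous irrelevance verdict.

```python
# Joint credences over the atoms (A&B, A&~B, ~A&B, ~A&~B), with each
# peer treating A and B as probabilistically independent.
def joint(pa, pb):
    return {(1, 1): pa * pb, (1, 0): pa * (1 - pb),
            (0, 1): (1 - pa) * pb, (0, 0): (1 - pa) * (1 - pb)}

def independent(p):
    pa = p[(1, 1)] + p[(1, 0)]
    pb = p[(1, 1)] + p[(0, 1)]
    return abs(p[(1, 1)] - pa * pb) < 1e-12

peer1, peer2 = joint(0.2, 0.2), joint(0.8, 0.8)
assert independent(peer1) and independent(peer2)

# Linear pooling (straight averaging) destroys the shared judgment of
# irrelevance: under the pooled credence, A and B are correlated.
pooled = {w: 0.5 * peer1[w] + 0.5 * peer2[w] for w in peer1}
assert not independent(pooled)

# A set-based resolution keeps the set of credence functions instead;
# every member of the set still satisfies the unanimous judgment.
assert all(independent(p) for p in [peer1, peer2])
```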

Kevin Zollman
The credit economy and the economic rationality of science

Scientists are motivated by the credit they are given for their discoveries by their peers. Traditional theories of the scientific method in philosophy do not include this motivation, and at first blush it appears as though these theories would regard it as inappropriate. A number of scholars have suggested, however, that this motivation serves to perpetuate successful science. It has been proposed as a mechanism to encourage more scientific effort and a mechanism to effectively allocate resources between competing research programs. This paper presents an economic model of scientists’ choices in which these claims can be formalized and evaluated. Ultimately, the paper comes to mixed conclusions. The motivation for credit may help to increase scientists’ effort in science, but may also serve to misallocate effort between competing research programs.
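
To get a feel for how credit can misallocate effort, here is a toy model of my own (not Zollman's): scientists choose between two research programs, credit for a program's success is shared equally among its workers, and we compare the credit-driven equilibrium with the allocation a planner would pick to maximize expected discoveries.

```python
N, p1, p2 = 10, 0.3, 0.05     # scientists; per-scientist success chances

def success(p, n):            # chance a program with n workers succeeds
    return 1 - (1 - p)**n

def credit(p, n):             # expected credit per scientist (shared equally)
    return success(p, n) / n if n else 0.0

# Community optimum: the split maximizing total expected discoveries.
best = max(range(N + 1), key=lambda n1: success(p1, n1) + success(p2, N - n1))

# Credit equilibrium: no scientist can raise her expected credit by switching.
def stable(n1):
    n2 = N - n1
    stay1 = n1 == 0 or credit(p1, n1) >= credit(p2, n2 + 1)
    stay2 = n2 == 0 or credit(p2, n2) >= credit(p1, n1 + 1)
    return stay1 and stay2

equilibria = [n1 for n1 in range(N + 1) if stable(n1)]
# The planner would diversify, but credit piles everyone onto program 1.
assert best == 6 and equilibria == [10]
```

The point of the sketch is only that individually rational credit-seeking and community-level efficiency can come apart; Zollman's paper develops the economics far more carefully and, as the abstract says, reaches mixed conclusions.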

Easwaran: A New Framework for Aggregating Utility

by Robby

A New Framework for Aggregating Utility
Kenny Easwaran (Texas A&M University)
4:10 pm, Friday, March 11, 2016
Faculty House, Columbia University

Abstract. It is often assumed that a natural way to aggregate utility over multiple agents is by addition. When there are infinitely many agents, this leads to various problems. Vallentyne and Kagan approach this problem by providing a partial ordering over outcomes, rather than a numerical aggregate value. Bostrom and Arntzenius both argue that without a numerical value, it is difficult to integrate this aggregation into our best method for considering acts with risky outcomes: expected value.

My 2014 paper, “Decision Theory without Representation Theorems”, describes a project for evaluating risky acts that extends expected value to cases where it is infinite or undefined. The project of this paper is to extend this methodology in a way that deals with risk and aggregation across agents simultaneously, instead of giving priority to one or the other as Bostrom and Arntzenius require. The result is still merely a partial ordering, but since it already includes all considerations of risk and aggregation, there is no further need for particular numerical representations.
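
A toy sketch of my own (not Easwaran's construction) shows both why addition fails with infinitely many agents and what a merely partial ordering looks like: totals diverge for every outcome below, yet a Pareto-style comparison still ranks some pairs while leaving others incomparable.

```python
# Each outcome assigns a utility to agent i = 0, 1, 2, ...
A = lambda i: 1.0                     # every agent gets 1
B = lambda i: 1.0 if i else 2.0       # agent 0 gets 2, the rest get 1
C = lambda i: 2.0 if i % 2 else 0.0   # alternating 0, 2, 0, 2, ...

def dominates(X, Y, horizon=10_000):
    """Pareto-style partial order: X beats Y if no agent is worse off and
    some agent is strictly better off (checked up to a finite horizon,
    standing in here for the full infinite population)."""
    vals = [(X(i), Y(i)) for i in range(horizon)]
    return all(x >= y for x, y in vals) and any(x > y for x, y in vals)

# Total utility is infinite for all three outcomes, so sums cannot rank
# them; the partial order still can, sometimes:
assert dominates(B, A)                # B is better for agent 0, equal elsewhere
assert not dominates(A, B)
# A and C are incomparable: each is better for infinitely many agents,
# which is exactly why the ordering is merely partial.
assert not dominates(A, C) and not dominates(C, A)
```

Easwaran's framework folds risk into the same comparison rather than aggregating agents first and taking expectations afterwards (or vice versa); the sketch above only illustrates the agent-aggregation half.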

Floyd: Gödel on Russell

by Yang Liu

Gödel on Russell: Truth, Perception, and an Infinitary Version of the Multiple Relation Theory of Judgment
Juliet Floyd (Boston University)
4:10 pm, May 8, 2014
Faculty House, Columbia University

Beziau: Round Squares are No Contradictions

by Yang Liu

Round Squares are No Contradictions
Jean-Yves Beziau (Federal University of Rio de Janeiro and UC San Diego)
4:10 pm, Friday, April 24, 716 Philosophy Hall, Columbia University

Abstract. When talking about contradictions many people think of a round square as a typical example. We will explain in this talk that this is the result of a confusion between two notions of opposition: contradiction and contrariety. The distinction goes back to Aristotle, but it seems that up to now it has not been firmly implemented in the mind of many rational animals, nor in their languages.

According to the square of opposition, two propositions are contradictory iff they cannot be true together and cannot be false together, and they are contrary iff they cannot be true together but can be false together. The propositions “X is a square” and “X is a circle” cannot be true together according to the standard definitions of these geometrical objects, but they can be false together: X can be a triangle, something which is neither a square nor a circle. A round square is a contrariety, not a contradiction. Aristotle insisted that there were two different kinds of opposition; from this distinction grew a theory of oppositions that was later shaped into a diagram by Apuleius and Boethius.

It is easy to find examples of contrarieties, but not so of contradictions. Many pairs of famous oppositions are rather contraries: black and white (think of the rainbow), right and left (think of the center), day and night (think of dawn or twilight), happy and sad (think of insensibility), noise and silence (think of music), etc. Examples of “real” contradictions are generally from mathematics: odd and even, curved and straight, one and many, finite and infinite. We can indeed wonder if there are any contradictions in (non-mathematical) reality, or if contradiction is just an abstraction of our mind expressed through classical negation, according to which p and ¬p is a contradiction.
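
The two definitions from the square of opposition are mechanical enough to check by enumeration. A small sketch (my own encoding, using only the definitions stated in the abstract):

```python
# Model propositions as predicates over a space of cases and apply the
# square-of-opposition definitions directly.
def contradictory(p, q, cases):
    never_both_true  = not any(p(x) and q(x) for x in cases)
    never_both_false = not any(not p(x) and not q(x) for x in cases)
    return never_both_true and never_both_false

def contrary(p, q, cases):
    never_both_true   = not any(p(x) and q(x) for x in cases)
    can_both_be_false = any(not p(x) and not q(x) for x in cases)
    return never_both_true and can_both_be_false

shapes = ["square", "circle", "triangle"]
is_square = lambda x: x == "square"
is_circle = lambda x: x == "circle"
# "X is a square" and "X is a circle" are contraries, not contradictories:
# a triangle makes both false at once.
assert contrary(is_square, is_circle, shapes)
assert not contradictory(is_square, is_circle, shapes)

# Odd and even (over the integers we sample) are genuinely contradictory:
# never both true, never both false.
ints = range(-5, 6)
odd  = lambda n: n % 2 == 1
even = lambda n: n % 2 == 0
assert contradictory(odd, even, ints)
assert not contrary(odd, even, ints)
```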

Bjorndahl: Language Based Games

by Yang Liu

Language Based Games
Adam Bjorndahl (Carnegie Mellon University)
10:30 AM to 12:30 PM, Friday, March 20, 2015
Room 7395, CUNY Graduate Center

Abstract: We introduce a generalization of classical game theory wherein each player has a fixed “language of preference”: a player can prefer one state of the world to another if and only if they can describe the difference between the two in this language. The expressiveness of the language therefore plays a crucial role in determining the parameters of the game. By choosing appropriately rich languages, this framework can capture classical games as well as various generalizations thereof (e.g., psychological games, reference-dependent preferences, and Bayesian games). On the other hand, coarseness in the language—cases where there are fewer descriptions than there are actual differences to describe—offers insight into some long-standing puzzles of human decision-making.

The Allais paradox, for instance, can be resolved simply and intuitively using a language with coarse beliefs: that is, by assuming that probabilities are represented not on a continuum, but discretely, using finitely-many “levels” of likelihood (e.g., “no chance”, “slight chance”, “unlikely”, “likely”, etc.). Many standard solution concepts from classical game theory can be imported into the language-based framework by taking their epistemic characterizations as definitional. In this way, we obtain natural generalizations of Nash equilibrium, correlated equilibrium, and rationalizability. We show that there are language-based games that admit no Nash equilibria using a simple example where one player wishes to surprise her opponent. By contrast, the existence of rationalizable strategies can be proved under mild conditions. This is joint work with Joe Halpern and Rafael Pass.
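
A rough rendering of the coarse-beliefs idea as applied to Allais (the level thresholds and the encoding of the lotteries are my own simplification, not the authors' formal model): if 10% and 11% fall into the same coarse likelihood level, the second pair of Allais gambles differ only in prize size, while certainty remains distinguishable from a slight chance of ruin in the first pair, so the typical choice pattern stops looking inconsistent.

```python
# Coarse likelihood levels: probabilities are perceived only up to a level.
def level(p):
    if p == 0:      return "no chance"
    if p <= 0.02:   return "slight chance"
    if p <= 0.15:   return "unlikely"
    if p < 1:       return "likely"
    return "certain"

# The standard Allais lotteries as {prize: probability}.
L1A = {1_000_000: 1.0}
L1B = {5_000_000: 0.10, 1_000_000: 0.89, 0: 0.01}
L2A = {1_000_000: 0.11, 0: 0.89}
L2B = {5_000_000: 0.10, 0: 0.90}

coarsen = lambda L: {prize: level(p) for prize, p in L.items()}

# The 10% and 11% winning chances are indistinguishable in the coarse
# language, so 2B looks like 2A with a bigger prize...
assert level(0.10) == level(0.11) == "unlikely"
# ...while 1A's certainty of $1M remains distinguishable from 1B's
# slight chance of walking away with nothing.
assert level(1.0) != level(0.99)
assert coarsen(L1B)[0] == "slight chance"
```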

Hartmann: Learning Conditionals and the Problem of Old Evidence

by Yang Liu

Learning Conditionals and the Problem of Old Evidence
Stephan Hartmann (Ludwig Maximilians-Universität München)
4:10 pm, February 13, 2015
Faculty House, Columbia University

Abstract. The following are abstracts of two papers on which this talk is based.

The Problem of Old Evidence has troubled Bayesians ever since Clark Glymour first presented it in 1980. Several solutions have been proposed, but all of them have drawbacks and none of them is considered to be the definitive solution. In this article, I propose a new solution which combines several old ideas with a new one. It circumvents the crucial omniscience problem in an elegant way and leads to a considerable confirmation of the hypothesis in question.

Modeling how to learn an indicative conditional has been a major challenge for formal epistemologists. One proposal to meet this challenge is to construct the posterior probability distribution by minimizing the Kullback-Leibler divergence between the posterior probability distribution and the prior probability distribution, taking the learned information, expressed as a conditional probability statement, into account as a constraint. This proposal has been criticized in the literature based on several clever examples. In this article, we revisit four of these examples and show that one obtains intuitively correct results for the posterior probability distribution if the underlying probabilistic models reflect the causal structure of the scenarios in question.
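
The divergence-minimization rule the abstract describes can be made concrete on a four-atom space (the prior numbers are my own; the closed form is the standard minimum-KL solution for a linear constraint, obtained by exponentially tilting the prior):

```python
from math import log

def learn_conditional(prior, q):
    """Posterior over atoms ('AB', 'Ab', 'aB', 'ab') minimizing
    KL(post || prior) subject to post(B | A) = q.  The constraint is
    linear in the posterior, so the minimizer tilts the prior
    exponentially on the constrained atoms; atoms where A is false
    keep their relative weights and are only renormalized."""
    t = q * prior['Ab'] / ((1 - q) * prior['AB'])   # e^lambda from the constraint
    raw = {'AB': prior['AB'] * t**(1 - q),
           'Ab': prior['Ab'] * t**(-q),
           'aB': prior['aB'],
           'ab': prior['ab']}
    Z = sum(raw.values())
    return {w: v / Z for w, v in raw.items()}

def kl(post, prior):
    return sum(p * log(p / prior[w]) for w, p in post.items())

prior = {'AB': 0.2, 'Ab': 0.3, 'aB': 0.25, 'ab': 0.25}   # prior P(B|A) = 0.4
post = learn_conditional(prior, 0.9)                      # learn "if A then B" as P(B|A) = 0.9

assert abs(sum(post.values()) - 1) < 1e-12                # a probability function
assert abs(post['AB'] / (post['AB'] + post['Ab']) - 0.9) < 1e-12   # constraint met
# Other ways of satisfying the constraint sit farther from the prior,
# e.g. conditioning only the A-atoms while freezing P(A) at 0.5:
alt = {'AB': 0.45, 'Ab': 0.05, 'aB': 0.25, 'ab': 0.25}
assert kl(post, prior) <= kl(alt, prior) + 1e-12
```

Note that the minimum-KL posterior generally shifts P(A) as well, which is precisely the behavior the clever counterexamples in the literature exploit; the abstract's claim is that such results look right once the model reflects the causal structure of the scenario.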