Contents
Material to categorize
  1. Are Current AI Systems Capable of Well-Being? James Fanciullo - forthcoming - Asian Journal of Philosophy.
    Recently, Simon Goldstein and Cameron Domenico Kirk-Giannini have argued that certain existing AI systems are capable of well-being. They consider the three leading approaches to well-being—hedonism, desire satisfactionism, and the objective list approach—and argue that theories of these kinds plausibly imply that some current AI systems are capable of welfare. In this paper, I argue that the leading versions of each of these theories do not imply this. I conclude that we have strong reason to doubt that current AI systems (...)
  2. AI Mimicry and Human Dignity: Chatbot Use as a Violation of Self-Respect. Jan-Willem van der Rijt, Dimitri Coelho Mollo & Bram Vaassen - manuscript
    This paper investigates how human interactions with AI-powered chatbots may offend human dignity. Current chatbots, driven by large language models (LLMs), mimic human linguistic behaviour but lack the moral and rational capacities essential for genuine interpersonal respect. Human beings are prone to anthropomorphise chatbots—indeed, chatbots appear to be deliberately designed to elicit that response. As a result, human beings’ behaviour toward chatbots often resembles behaviours typical of interaction between moral agents. Drawing on a second-personal, relational account of dignity, we argue (...)
  3. Construct Validity in Automated Counterterrorism Analysis. Adrian K. Yee - 2025 - Philosophy of Science 92 (1):1-18.
    Governments and social scientists are increasingly developing machine learning methods to automate the process of identifying terrorists in real time and predict future attacks. However, current operationalizations of “terrorist” in artificial intelligence are difficult to justify given three issues that remain neglected: insufficient construct legitimacy, insufficient criterion validity, and insufficient construct validity. I conclude that machine learning methods should at most be used for the identification of singular individuals deemed terrorists and not for identifying possible terrorists from some more general (...)
  4. "Responsibility" Plus "Gap" Equals "Problem".Marc Champagne - 2025 - In Johanna Seibt, Peter Fazekas & Oliver Santiago Quick (eds.), Social Robots with AI: Prospects, Risks, and Responsible Methods. Amsterdam: IOS Press. pp. 244–252.
    Peter Königs recently argued that, while autonomous robots generate responsibility gaps, such gaps need not be considered problematic. I argue that Königs’ compromise dissolves under analysis since, on a proper understanding of what “responsibility” is and what “gap” (metaphorically) means, their joint endorsement must repel an attitude of indifference. So, just as “calamities that happen but don’t bother anyone” makes no sense, the idea of “responsibility gaps that exist but leave citizens and ethicists unmoved” makes no sense.
  5. Explicability as an AI Principle: Technology and Ethics in Cooperation. Moto Kamiura - forthcoming - Proceedings of the 39th Annual Conference of the Japanese Society for Artificial Intelligence, 2025.
    This paper categorizes current approaches to AI ethics into four perspectives and briefly summarizes them: (1) Case studies and technical trend surveys, (2) AI governance, (3) Technologies for AI alignment, (4) Philosophy. In the second half, we focus on the fourth perspective, the philosophical approach, within the context of applied ethics. In particular, the explicability of AI may be an area in which scientists, engineers, and AI developers are expected to engage more actively relative to other ethical issues in AI.
  6. Beauty Filters in Self-Perception: The Distorted Mirror Gazing Hypothesis. Gloria Andrada - 2025 - Topoi:1-12.
    Beauty filters are automated photo editing tools that use artificial intelligence and computer vision to detect facial features and modify them, allegedly improving a face’s physical appearance and attractiveness. Widespread use of these filters has raised concern due to their potentially damaging psychological effects. In this paper, I offer an account that examines the effect that interacting with such filters has on self-perception. I argue that when looking at digitally-beautified versions of themselves, individuals are looking at AI-curated distorted mirrors. This (...)
  7. Preservation or Transformation: A Daoist Guide to Griefbots. Pengbo Liu - forthcoming - In Henry Shevlin (ed.), AI in Society: Relationships (Oxford Intersections). Oxford University Press.
    Griefbots are chatbots modeled on the personalities of deceased individuals, designed to assist with the grieving process and, according to some, to continue relationships with loved ones after their physical passing. The essay examines the promises and perils of griefbots from a Daoist perspective. According to the Daoist philosopher Zhuangzi, death is a natural and inevitable phenomenon, a manifestation of the constant changes and transformations in the world. This approach emphasizes adaptability, flexibility, and openness to alternative ways of relating to (...)
  8. Artificially sentient beings: Moral, political, and legal issues. Fırat Akova - 2023 - New Techno-Humanities 3 (1):41-48.
    The emergence of artificially sentient beings raises moral, political, and legal issues that deserve scrutiny. First, it may be difficult to understand the well-being elements of artificially sentient beings and theories of well-being may have to be reconsidered. For instance, as a theory of well-being, hedonism may need to expand the meaning of happiness and suffering or it may run the risk of being irrelevant. Second, we may have to compare the claims of artificially sentient beings with the claims of (...)
  9. A Roadmap for Governing AI: Technology Governance and Power-Sharing Liberalism. Danielle Allen, Woojin Lim, Sarah Hubbard, Allison Stanger, Shlomit Wagman, Kinney Zalesne & Omoaholo Omoakhalen - 2025 - AI and Ethics 4 (4).
    This paper aims to provide a roadmap for governing AI. In contrast to the reigning paradigms, we argue that AI governance should be not merely a reactive, punitive, status-quo-defending enterprise, but rather the expression of an expansive, proactive vision for technology—to advance human flourishing. Advancing human flourishing in turn requires democratic/political stability and economic empowerment. To accomplish this, we build on a new normative framework that will give humanity its best chance to reap the full benefits, while avoiding the dangers, (...)
  10. Health AI Poses Distinct Harms and Potential Benefits for Disabled People. Charles Binkley, Joel Michael Reynolds & Andrew Schuman - 2025 - Nature Medicine 1.
    This piece in Nature Medicine notes the risks that the incorporation of AI systems into health care poses to disabled patients, and proposes ways to avoid those risks and instead create benefit.
  11. 50 preguntas sobre tecnologías para un envejecimiento activo y saludable. Edición española. Francisco Florez-Revuelta, Alin Ake-Kob, Pau Climent-Perez, Paulo Coelho, Liane Colonna, Laila Dahabiyeh, Carina Dantas, Esra Dogru-Huzmeli, Hazım Kemal Ekenel, Aleksandar Jevremovic, Nina Hosseini-Kivanani, Aysegul Ilgaz, Mladjan Jovanovic, Andrzej Klimczuk, Maksymilian M. Kuźmicz, Petre Lameski, Ferlanda Luna, Natália Machado, Tamara Mujirishvili, Zada Pajalic, Galidiya Petrova, Nathalie G. S. Puaschitz, Maria Jose Santofimia, Agusti Solanas, Wilhelmina van Staalduinen & Ziya Ata Yazici - 2024 - Alicante: University of Alicante.
    This handbook on technologies for active and healthy ageing, also known as Active Assisted Living (AAL), was created as part of the GoodBrother COST Action, which ran from 2020 to 2024. COST Actions are European research programmes that promote international collaboration, bringing together researchers, professionals, and institutions to address major societal challenges. GoodBrother has focused on the ethical and privacy issues (...)
  12. Do It Yourself Content and the Wisdom of the Crowds. Dallas Amico-Korby, Maralee Harrell & David Danks - 2025 - Erkenntnis:1-29.
    Many social media platforms enable (nearly) anyone to post (nearly) anything. One clear downside of this permissiveness is that many people appear bad at determining who to trust online. Hacks, quacks, climate change deniers, vaccine skeptics, and election deniers have all gained massive followings in these free markets of ideas, and many of their followers seem to genuinely trust them. At the same time, there are many cases in which people seem to reliably determine who to trust online. Consider, for (...)
  13. Can Chatbots Preserve Our Relationships with the Dead? Stephen M. Campbell, Pengbo Liu & Sven Nyholm - forthcoming - Journal of the American Philosophical Association.
    Imagine that you are given access to an AI chatbot that compellingly mimics the personality and speech of a deceased loved one. If you start having regular interactions with this “thanabot,” could this new relationship be a continuation of the relationship you had with your loved one? And could a relationship with a thanabot preserve or replicate the value of a close human relationship? To the first question, we argue that a relationship with a thanabot cannot be a true continuation (...)
  14. ¿Cómo integrar la ética aplicada a la inteligencia artificial en el currículo? Análisis y recomendaciones desde el feminismo de la ciencia y de datos. G. Arriagada Bruneau & Javiera Arias - 2024 - Revista de filosofía (Chile) 81:137-160.
    This article examines the incorporation of applied ethics into artificial intelligence (AI) within Chilean university curricula, emphasizing the urgent need to implement an integrated framework of action. Through a documentary analysis, it becomes evident that most higher education programs do not explicitly include AI ethics courses in their curricula, highlighting the need for institutionalizing this integration systematically. In response, we propose an approach grounded in feminist science and data feminism, advocating for the inclusion of diverse perspectives and experiences in the (...)
  15. Moral parallax: challenges between dignity, AI, and virtual violence. Pablo De la Vega - 2024 - Trayectorias Humanas Trascontinentales 18:116-128.
    Virtual reality is not only a feat of technological advancement and AI, but also an element that extends the horizons of human existence and complicates the way we approach various phenomena of the physical world, for example, violence. Its practice in virtuality leads to a series of challenges, especially when virtual reality is considered as genuine reality. This text delves into virtual violence, the influence of AI on it, and the problems that its conception implies. To analyze this phenomenon, parallax (...)
  16. A hybrid marketplace of ideas. Tomer Jordi Chaffer, Dontrail Cotlage & Justin Goldston - manuscript
    The convergence of humans and artificial intelligence (AI) systems introduces new dynamics into the cultural and intellectual landscape. Complementing emerging cultural evolution concepts such as machine culture, AI agents represent a significant techno-sociological development, particularly within the anthropological study of Web3 as a community focused on decentralization through blockchain. Despite their growing presence, the cultural significance of AI agents remains largely unexplored in academic literature. Toward this end, we conceived hybrid netnography, a novel interdisciplinary approach that examines the cultural and (...)
  17. AI through the looking glass: an empirical study of structural social and ethical challenges in AI. Mark Ryan, Nina De Roo, Hao Wang, Vincent Blok & Can Atik - 2024 - AI and Society 1 (1):1-17.
    This paper examines how professionals (N = 32) working on artificial intelligence (AI) view structural AI ethics challenges like injustices and inequalities beyond individual agents' direct intention and control. This paper answers the research question: What are professionals’ perceptions of the structural challenges of AI (in the agri-food sector)? This empirical paper shows that it is essential to broaden the scope of ethics of AI beyond micro- and meso-levels. While ethics guidelines and AI ethics often focus on the responsibility of (...)
  18. A Bias Network Approach (BNA) to Encourage Ethical Reflection Among AI Developers. Gabriela Arriagada-Bruneau, Claudia López & Alexandra Davidoff - 2024 - Science and Engineering Ethics 31 (1):1-29.
    We introduce the Bias Network Approach (BNA) as a sociotechnical method for AI developers to identify, map, and relate biases across the AI development process. This approach addresses the limitations of what we call the "isolationist approach to AI bias," a trend in the AI literature where biases are seen as separate occurrences linked to specific stages in an AI pipeline. Dealing with these multiple biases can trigger a sense of excessive overload in managing each potential bias individually or promote the (...)
  19. Book review: Nyholm, Sven (2023): This is technology ethics. An introduction. [REVIEW] Michael W. Schmidt - 2024 - TATuP - Zeitschrift Für Technikfolgenabschätzung in Theorie Und Praxis 33 (3):80–81.
    Have you been surprised by the recent development and diffusion of generative artificial intelligence (AI)? Many institutions of civil society have been caught off guard, which provides them with motivation to think ahead. And as many new plausible pathways of socio-technical development are opening up, a growing interest in technology ethics that addresses our corresponding moral uncertainties is warranted. In Sven Nyholm’s words, “[t]he field of technology ethics is absolutely exploding at the moment” (p. 262), and so the publication of (...)
  20. AI Romance and Misogyny: A Speech Act Analysis. A. G. Holdier & Kelly Weirich - forthcoming - Oxford Intersections: AI in Society (Relationships).
    Through the lens of feminist speech act theory, this paper argues that artificial intelligence romance systems objectify and subordinate nonvirtual women. AI romance systems treat their users as consumers, offering them relational invulnerability and control over their (usually feminized) digital romantic partner. This paper argues that, though the output of AI chatbots may not generally constitute speech, the framework offered by an AI romance system communicates an unjust perspective on intimate relationships. Through normalizing controlling one’s intimate partner, these systems operate (...)
  21. Critical Provocations for Synthetic Data. Daniel Susser & Jeremy Seeman - 2024 - Surveillance and Society 22 (4):453-459.
    Training artificial intelligence (AI) systems requires vast quantities of data, and AI developers face a variety of barriers to accessing the information they need. Synthetic data has captured researchers’ and industry’s imagination as a potential solution to this problem. While some of the enthusiasm for synthetic data may be warranted, in this short paper we offer a critical counterweight to simplistic narratives that position synthetic data as a cost-free solution to every data-access challenge—provocations highlighting ethical, political, and governance issues the use (...)
  22. Towards a Unified List of Ethical Principles for Emerging Technologies. An Analysis of Four European Reports on Molecular Biotechnology and Artificial Intelligence. Elisa Orrù & Joachim Boldt - 2022 - Sustainable Futures 4:1-14.
    Artificial intelligence (AI) and molecular biotechnologies (MB) are among the most promising, but also ethically hotly debated emerging technologies. In both fields, several ethics reports, which invoke lists of ethics principles, have been put forward. These reports and the principles lists are technology specific. This article aims to contribute to the ongoing debate on ethics of emerging technologies by comparatively analysing four European ethics reports from the two technology fields. Adopting a qualitative and in-depth approach, the article highlights how ethics (...)
  23. (1 other version) Biomimicry and AI-Enabled Automation in Agriculture. Conceptual Engineering for Responsible Innovation. Marco Innocenti - 2025 - Journal of Agricultural and Environmental Ethics 38 (2):1-17.
    This paper aims to engineer the concept of biomimetic design for its application in agricultural technology as an innovation strategy to sustain non-human species’ adaptation to today’s rapid environmental changes. By questioning the alleged intrinsic morality of biomimicry, it seeks a formulation of biomimicry that goes beyond the sharp distinction between nature as inspiration and the human field of application of biomimetic technologies. After reviewing the main literature on Responsible Innovation, we support Vincent Blok’s “eco-centric” perspective on biomimicry, which considers (...)
  24. Rage Against the Authority Machines: How to Design Artificial Moral Advisors for Moral Enhancement. Ethan Landes, Cristina Voinea & Radu Uszkai - forthcoming - AI and Society:1-12.
    This paper aims to clear up the epistemology of learning morality from Artificial Moral Advisors (AMAs). We start with a brief consideration of what counts as moral enhancement and consider the risk of deskilling raised by machines that offer moral advice. We then shift focus to the epistemology of moral advice and show when and under what conditions moral advice can lead to enhancement. We argue that people’s motivational dispositions are enhanced by inspiring people to act morally, instead of merely (...)
  25. As máquinas podem cuidar? E. M. Carvalho - 2024 - O Que Nos Faz Pensar 31 (53):6-24.
    Applications and devices of artificial intelligence are increasingly common in the healthcare field. Robots fulfilling some caregiving functions are not a distant future. In this scenario, we must ask ourselves if it is possible for machines to care to the extent of completely replacing human care and if such replacement, if possible, is desirable. In this paper, I argue that caregiving requires know-how permeated by affectivity that is far from being achieved by currently available machines. I also maintain that the (...)
  26. Virtues for AI. Jakob Ohlhorst - manuscript
    Virtue theory is a natural approach towards the design of artificially intelligent systems, given that the design of artificial intelligence essentially aims at designing agents with excellent dispositions. This has led to a lively research programme to develop artificial virtues. However, this research programme has until now had a narrow focus on moral virtues in an Aristotelian mould. While Aristotelian moral virtue has played a foundational role for the field, it unduly constrains the possibilities of virtue theory for artificial intelligence. (...)
  27. AI and Democratic Equality: How Surveillance Capitalism and Computational Propaganda Threaten Democracy. Ashton Black - 2024 - In Bernhard Steffen (ed.), Bridging the Gap Between AI and Reality. Springer Nature. pp. 333-347.
    In this paper, I argue that surveillance capitalism and computational propaganda can undermine democratic equality. First, I argue that two types of resources are relevant for democratic equality: 1) free time, which entails time that is free from systemic surveillance, and 2) epistemic resources. In order for everyone in a democratic system to be equally capable of full political participation, it’s a minimum requirement that these two resources are distributed fairly. But AI that’s used for surveillance capitalism can undermine the (...)
  28. Publishing Robots. Nicholas Hadsell, Rich Eva & Kyle Huitt - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    If AI can write an excellent philosophy paper, we argue that philosophy journals should strongly consider publishing that paper. After all, AI stands to make significant contributions to ongoing projects in some subfields, and it benefits the world of philosophy for those contributions to be published in journals, the primary purpose of which is to disseminate significant contributions to philosophy. We also propose the Sponsorship Model of AI journal refereeing to mitigate any costs associated with our view. This model requires (...)
  29. A minimal dose of self-reflective humor in Wild Wise Weird: The Kingfisher story collection. Manh-Tung Ho - manuscript
    In this essay, I review one of my beloved fictional titles, Wild Wise Weird: The Kingfisher Story collection. The minimal sense of humor and satire in the storytelling of Wild Wise Weird is sure to bring readers smiles and, better yet, moments of quiet reflection, a much under-appreciated remedy in a world driven almost insane by the abundance of information co-created with AI technologies. I hope to do justice to the book.
  30. Multimodal Artificial Intelligence in Medicine. Joshua August Skorburg - forthcoming - Kidney360.
    Traditional medical Artificial Intelligence models, approved for clinical use, restrict themselves to single-modal data, e.g. images only, limiting their applicability in the complex, multimodal environment of medical diagnosis and treatment. Multimodal Transformer Models in healthcare can effectively process and interpret diverse data forms such as text, images, and structured data. They have demonstrated impressive performance on standard benchmarks like USMLE question banks and continue to improve with scale. However, the adoption of these advanced AI models is not without challenges. While (...)
  31. Artificial Intelligence, Creativity, and the Precarity of Human Connection. Lindsay Brainard - forthcoming - Oxford Intersections: AI in Society.
    There is an underappreciated respect in which the widespread availability of generative artificial intelligence (AI) models poses a threat to human connection. My central contention is that human creativity is especially capable of helping us connect to others in a valuable way, but the widespread availability of generative AI models reduces our incentives to engage in various sorts of creative work in the arts and sciences. I argue that creative endeavors must be motivated by curiosity, and so they must disclose (...)
  32. Automated Influence and Value Collapse. Dylan J. White - 2024 - American Philosophical Quarterly 61 (4):369-386.
    Automated influence is one of the most pervasive applications of artificial intelligence in our day-to-day lives, yet a thoroughgoing account of its associated individual and societal harms is lacking. By far the most widespread, compelling, and intuitive account of the harms associated with automated influence follows what I call the control argument. This argument suggests that users are persuaded, manipulated, and influenced by automated influence in a way that they have little or no control over. Based on evidence about the (...)
  33. Sự gia tăng của AI tạo sinh và những rủi ro tiềm ẩn cho con người. Hoang Tung-Duong, Dang Tuan-Dung & Manh-Tung Ho - 2024 - Tạp Chí Thông Tin Và Truyền Thông 9 (9/2024):66-73.
    The emergence of generative AI tools built on large language models (LLMs) has given people a new instrument, especially in fields such as education and journalism, but these tools also bring many problems. In this article, the authors point out newly emerging shortcomings, as well as existing problems that risk being pushed to a higher level (...)
  34. Artificial agents: responsibility & control gaps. Herman Veluwenkamp & Frank Hindriks - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    Artificial agents create significant moral opportunities and challenges. Over the last two decades, discourse has largely focused on the concept of a ‘responsibility gap.’ We argue that this concept is incoherent, misguided, and diverts attention from the core issue of ‘control gaps.’ Control gaps arise when there is a discrepancy between the causal control an agent exercises and the moral control it should possess or emulate. Such gaps present moral risks, often leading to harm or ethical violations. We propose a (...)
  35. Algorithms Advise, Humans Decide: the Evidential Role of the Patient Preference Predictor. Nicholas Makins - forthcoming - Journal of Medical Ethics.
    An AI-based “patient preference predictor” (PPP) is a proposed method for guiding healthcare decisions for patients who lack decision-making capacity. The proposal is to use correlations between sociodemographic data and known healthcare preferences to construct a model that predicts the unknown preferences of a particular patient. In this paper, I highlight a distinction that has been largely overlooked so far in debates about the PPP, that between algorithmic prediction and decision-making, and argue that much of the recent philosophical disagreement stems from this (...)
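    For readers outside machine learning, the entry above describes the patient preference predictor only at the level of "correlations between sociodemographic data and known preferences." The following is a minimal, hypothetical sketch of that kind of model in Python (not the paper's implementation; the features and labels are illustrative assumptions only):

        # Minimal, illustrative sketch of a preference-prediction model of the kind
        # discussed above (not the paper's implementation). Features and labels are
        # hypothetical: [age, is_female, has_religious_affiliation] -> recorded
        # preference (1 = previously expressed preference for the intervention).
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        X = np.array([[72, 1, 0],
                      [65, 0, 1],
                      [80, 1, 1],
                      [58, 0, 0]])
        y = np.array([1, 0, 1, 0])

        model = LogisticRegression().fit(X, y)

        # The model outputs only a probability for a new, incapacitated patient.
        # Whether and how that prediction should guide the clinical decision is the
        # separate, normative question the paper distinguishes from prediction.
        new_patient = np.array([[77, 1, 0]])
        print(model.predict_proba(new_patient)[0, 1])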
  36. Will AI and Humanity Go to War? Simon Goldstein - manuscript
    This paper offers the first careful analysis of the possibility that AI and humanity will go to war. The paper focuses on the case of artificial general intelligence, AI with broadly human capabilities. The paper uses a bargaining model of war to apply standard causes of war to the special case of AI/human conflict. The paper argues that information failures and commitment problems are especially likely in AI/human conflict. Information failures would be driven by the difficulty of measuring AI capabilities, (...)
  37. The Harm of Social Media to Public Reason. Paige Benton & Michael W. Schmidt - 2024 - Topoi 43 (5):1433–1449.
    It is commonly agreed that so-called echo chambers and epistemic bubbles, associated with social media, are detrimental to liberal democracies. Drawing on John Rawls’s political liberalism, we offer a novel explanation of why social media platforms amplifying echo chambers and epistemic bubbles are likely contributing to the violation of the democratic norms connected to the ideal of public reason. These norms are clarified with reference to the method of (full) reflective equilibrium, which we argue should be cultivated as a civic (...)
  38. Implicaciones de la tecnosecuritización en las relaciones internacionales contemporáneas. Felix D. Andueza Araque - 2022 - Dissertation, Pontificia Universidad Católica Del Ecuador
    In recent years there has been a transition within society towards governments much more permeated by technology. One of the areas where this technological increase has been seen is in security processes. Currently, under the justification of a safer world, the population is subjected to perpetual surveillance through facial recognition cameras and data collection. For this reason, this research work seeks to explain how securitisation processes have become more complex, giving way to a much broader, subjective prevention of security problems. (...)
  39. Should You Trust Your Voice Assistant? It’s Complicated, but No. Filippos Stamatiou & Xenofon Karakonstantis - 2024 - In Florian Westphal, Einav Peretz-Andersson, Maria Riveiro, Kerstin Bach & Fredrik Heintz (eds.), 14th Scandinavian Conference on Artificial Intelligence SCAI 2024. Linköping, Sweden: Linköping Electronic Conference Proceedings 208.
  40. 50 questions on Active Assisted Living technologies. Global edition. Francisco Florez-Revuelta, Alin Ake-Kob, Pau Climent-Perez, Paulo Coelho, Liane Colonna, Laila Dahabiyeh, Carina Dantas, Esra Dogru-Huzmeli, Hazim Kemal Ekenel, Aleksandar Jevremovic, Nina Hosseini-Kivanani, Aysegul Ilgaz, Mladjan Jovanovic, Andrzej Klimczuk, Maksymilian M. Kuźmicz, Petre Lameski, Ferlanda Luna, Natália Machado, Tamara Mujirishvili, Zada Pajalic, Galidiya Petrova, Nathalie G. S. Puaschitz, Maria Jose Santofimia, Agusti Solanas, Wilhelmina van Staalduinen & Ziya Ata Yazici - 2024 - Alicante: University of Alicante.
    This booklet on Active Assisted Living (AAL) technologies has been created as part of the GoodBrother COST Action, which has run from 2020 to 2024. COST Actions are European research programs that promote collaboration across borders, uniting researchers, professionals, and institutions to address key societal challenges. GoodBrother focused on ethical and privacy concerns surrounding video and audio monitoring in care settings. The aim was to ensure that while AAL technologies help older adults and vulnerable individuals, their privacy and data protection (...)
  41. “Der Mann mit Eigenschaften”, review of Joseph LeDoux: Im Netz der Persönlichkeit: Wie unser Selbst entsteht [Synaptic Self]. [REVIEW] Vincent C. Müller - 2004 - Süddeutsche Zeitung 2014 (14.01.2004):14.
    Review of Joseph LeDoux: Das Netz der Persönlichkeit. Wie unser Selbst entsteht. Walter Verlag, Düsseldorf 2003. 510 pages (with illustrations), 39.90 euros. - One person is distrustful, the next credulous; this one is warm-hearted, that one callous. Many have character, some even personality. How does this come about? In his new book, the neuroscientist Joseph LeDoux investigates how our self comes into being. This very readable and pleasantly translated work gives a vivid and detailed account of how, in our brain, the characteristics of an individual (...)
Algorithmic Fairness
  1. The Representative Individuals Approach to Fair Machine Learning. Clinton Castro & Michele Loi - forthcoming - AI and Ethics.
    The demands of fair machine learning are often expressed in probabilistic terms. Yet, most of the systems of concern are deterministic in the sense that whether a given subject will receive a given score on the basis of their traits is, for all intents and purposes, either zero or one. What, then, can justify this probabilistic talk? We argue that the statistical reference classes used in fairness measures can be understood as defining the probability that hypothetical persons, who are representative (...)
  2. An Impossibility Theorem for Base Rate Tracking and Equalized Odds. Rush Stewart, Benjamin Eva, Shanna Slank & Reuben Stern - 2024 - Analysis 84 (4):778-787.
    There is a theorem that shows that it is impossible for an algorithm to jointly satisfy the statistical fairness criteria of Calibration and Equalized Odds non-trivially. But what about the recently advocated alternative to Calibration, Base Rate Tracking? Here we show that Base Rate Tracking is strictly weaker than Calibration, and then take up the question of whether it is possible to jointly satisfy Base Rate Tracking and Equalized Odds in non-trivial scenarios. We show that it is not, thereby establishing (...)
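    For readers who do not know the criteria named in the entry above, the following are the formulations as they are commonly stated in the statistical fairness literature, for a risk score S, binary outcome Y, and group attribute A. Treat these as an approximate gloss; the paper's own definitions may differ in detail.

        \begin{align*}
        \text{Calibration:} \quad & \Pr(Y=1 \mid S=s,\, A=a) = s \quad \text{for all scores } s \text{ and groups } a,\\
        \text{Equalized Odds:} \quad & \Pr(\hat{Y}=1 \mid Y=y,\, A=a) = \Pr(\hat{Y}=1 \mid Y=y,\, A=b) \quad \text{for } y \in \{0,1\},\\
        \text{Base Rate Tracking:} \quad & \mathbb{E}[S \mid A=a] - \mathbb{E}[S \mid A=b] = \Pr(Y=1 \mid A=a) - \Pr(Y=1 \mid A=b).
        \end{align*}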
  3. Aspirational Affordances of AI. Sina Fazelpour & Meica Magnani - manuscript
    As artificial intelligence (AI) systems increasingly permeate processes of cultural and epistemic production, there are growing concerns about how their outputs may confine individuals and groups to static or restricted narratives about who or what they could be. In this paper, we advance the discourse surrounding these concerns by making three contributions. First, we introduce the concept of aspirational affordance to describe how technologies of representation—paintings, literature, photographs, films, or video games—shape the exercising of imagination, particularly as it pertains to (...)
  4. Diving into Fair Pools: Algorithmic Fairness, Ensemble Forecasting, and the Wisdom of Crowds. Rush T. Stewart & Lee Elkin - forthcoming - Analysis.
    Is the pool of fair predictive algorithms fair? It depends, naturally, on both the criteria of fairness and on how we pool. We catalog the relevant facts for some of the most prominent statistical criteria of algorithmic fairness and the dominant approaches to pooling forecasts: linear, geometric, and multiplicative. Only linear pooling, a format at the heart of ensemble methods, preserves any of the central criteria we consider. Drawing on work in the social sciences and social epistemology on the theoretical (...)
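    Since the entry above turns on the difference between the three pooling rules it names, here is a minimal Python sketch of those rules for binary-event probability forecasts. It is an illustration under simplifying assumptions (equal treatment of a single binary event), not the paper's formal framework.

        # Three standard ways to pool probability forecasts for one binary event.
        import numpy as np

        def linear_pool(ps, ws):
            # Weighted arithmetic mean of the forecasts.
            ps, ws = np.asarray(ps, float), np.asarray(ws, float)
            return float(np.dot(ws, ps) / ws.sum())

        def geometric_pool(ps, ws):
            # Weighted geometric mean, renormalized over the two outcomes.
            ps, ws = np.asarray(ps, float), np.asarray(ws, float)
            ws = ws / ws.sum()
            yes = np.prod(ps ** ws)
            no = np.prod((1 - ps) ** ws)
            return float(yes / (yes + no))

        def multiplicative_pool(ps):
            # Unweighted product of the forecasts, renormalized over the two outcomes.
            ps = np.asarray(ps, float)
            yes, no = np.prod(ps), np.prod(1 - ps)
            return float(yes / (yes + no))

        forecasts = [0.7, 0.6, 0.9]   # three forecasters' probabilities for the same event
        weights = [1.0, 1.0, 1.0]
        print(linear_pool(forecasts, weights),
              geometric_pool(forecasts, weights),
              multiplicative_pool(forecasts))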
  5. Be Intentional About Fairness!: Fairness, Size, and Multiplicity in the Rashomon Set. Gordon Dai, Pavan Ravishankar, Rachel Yuan, Daniel B. Neill & Emily Black - manuscript
    When selecting a model from a set of equally performant models, how much unfairness can you really reduce? Is it important to be intentional about fairness when choosing among this set, or is arbitrarily choosing among the set of “good” models good enough? Recent work has highlighted that the phenomenon of model multiplicity—where multiple models with nearly identical predictive accuracy exist for the same task—has both positive and negative implications for fairness, from strengthening the enforcement of civil rights law in (...)
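    As a concrete illustration of "being intentional" when choosing among near-equally-accurate models, the following toy Python sketch trains a few candidate classifiers on synthetic data, keeps those within a small accuracy margin of the best, and then selects the one with the smallest selection-rate gap between groups. It is a hypothetical example of the general idea, not the authors' procedure, and it evaluates on training data purely for brevity.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)
        n = 2000
        group = rng.integers(0, 2, n)            # hypothetical protected attribute
        X = np.column_stack([rng.normal(size=n), rng.normal(size=n), group])
        y = (X[:, 0] + 0.5 * group + rng.normal(size=n) > 0).astype(int)

        def selection_rate_gap(preds, group):
            # Demographic-parity-style gap: difference in positive prediction rates.
            return abs(preds[group == 1].mean() - preds[group == 0].mean())

        candidates = [LogisticRegression(),
                      DecisionTreeClassifier(max_depth=4, random_state=0),
                      RandomForestClassifier(n_estimators=50, random_state=0)]
        fitted = [(m.fit(X, y), m.score(X, y)) for m in candidates]

        best_acc = max(acc for _, acc in fitted)
        near_optimal = [m for m, acc in fitted if acc >= best_acc - 0.01]  # "Rashomon"-style set

        fairest = min(near_optimal, key=lambda m: selection_rate_gap(m.predict(X), group))
        print(type(fairest).__name__, selection_rate_gap(fairest.predict(X), group))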
  6. Informational richness and its impact on algorithmic fairness. Marcello Di Bello & Ruobin Gong - 2025 - Philosophical Studies 182 (1):25-53.
    The literature on algorithmic fairness has examined exogenous sources of biases such as shortcomings in the data and structural injustices in society. It has also examined internal sources of bias as evidenced by a number of impossibility theorems showing that no algorithm can concurrently satisfy multiple criteria of fairness. This paper contributes to the literature stemming from the impossibility theorems by examining how informational richness affects the accuracy and fairness of predictive algorithms. With the aid of a computer simulation, we (...)
  7. Algorithmic Fairness Criteria as Evidence. Will Fleisher - forthcoming - Ergo: An Open Access Journal of Philosophy.
    Statistical fairness criteria are widely used for diagnosing and ameliorating algorithmic bias. However, these fairness criteria are controversial, as their use raises several difficult questions. I argue that the major problems for statistical algorithmic fairness criteria stem from an incorrect understanding of their nature. These criteria are primarily used for two purposes: first, evaluating AI systems for bias, and second, constraining machine learning optimization problems in order to ameliorate such bias. The first purpose typically involves treating each criterion as a (...)
  8. Trustworthy use of artificial intelligence: Priorities from a philosophical, ethical, legal, and technological viewpoint as a basis for certification of artificial intelligence. Jan Voosholz, Maximilian Poretschkin, Frauke Rostalski, Armin B. Cremers, Alex Englander, Markus Gabriel, Dirk Hecker, Michael Mock, Julia Rosenzweig, Joachim Sicking, Julia Volmer, Angelika Voss & Stefan Wrobel - 2019 - Fraunhofer Institute for Intelligent Analysis and Information Systems IAIS.
    This publication forms a basis for the interdisciplinary development of a certification system for artificial intelligence. In view of the rapid development of artificial intelligence with disruptive and lasting consequences for the economy, society, and everyday life, it highlights the resulting challenges that can be tackled only through interdisciplinary dialogue between IT, law, philosophy, and ethics. As a result of this interdisciplinary exchange, it also defines six AI-specific audit areas for trustworthy use of artificial intelligence. They comprise fairness, transparency, autonomy (...)
  9. Vertrauenswürdiger Einsatz von Künstlicher Intelligenz. Jan Voosholz, Maximilian Poretschkin, Frauke Rostalski, Armin B. Cremers, Alex Englander, Markus Gabriel, Dirk Hecker, Michael Mock, Julia Rosenzweig, Joachim Sicking, Julia Volmer, Angelika Voss & Stefan Wrobel - 2019 - Fraunhofer-Institut Für Intelligente Analyse- Und Informationssysteme IAIS.
    This publication serves as a basis for the interdisciplinary development of a certification system for artificial intelligence. In view of the rapid development of artificial intelligence, with disruptive and lasting consequences for the economy, society, and everyday life, it makes clear that the resulting challenges can only be met through interdisciplinary dialogue between computer science, law, philosophy, and ethics. As a result of this interdisciplinary exchange, it also defines six AI-specific fields of action for the trustworthy use of artificial intelligence: they comprise fairness, transparency, autonomy and control, data protection, as well as security and (...)