Abstract: An individual human has value partly in virtue of their uniqueness. Personal avatar technology—technology which creates a digital replication of a real person—appears to have the potential to undermine that value. Here I explore if and how avatars might make humans less valuable by undermining the value that a human gains from being unique. Ultimately, I conclude that, while avatars cannot make humans no longer unique, they could significantly undermine the value that we place on human uniqueness. First, I argue that a qualitative model of uniqueness cannot account for the unique value that a person has. This leads to the significant and surprising claim that necessarily unique properties of humans cannot accommodate the value arising from human uniqueness: humans have unique value in virtue of being contingently irreplaceable. I explore how the use of personal avatars might undermine or even destroy that value. Finally, I consider further applications of the theory of unique human value, including how it might explain and accommodate our attachment to personal avatars themselves.
Abstract: This article delivers an account of what it is for a physical system to be programmable. Despite its significance in computing and beyond, today’s philosophical discourse on programmability is impoverished. This contribution offers a novel definition of physical programmability as the degree to which the selected operations of an automaton can be reconfigured in a controlled way. The framework highlights several key insights: the constrained applicability of physical programmability to material automata, the characterization of selected operations within the neo-mechanistic framework, the understanding of controlled reconfiguration through the causal theory of interventionism, and the recognition of physical programmability as a gradual notion. The account can be used to individuate programmable (computing) systems and taxonomize concrete systems based on their programmability. The article closes by posing some open questions and offering avenues for future research in this domain.
Abstract: This article challenges the dominant ‘black box’ metaphor in critical algorithm studies by proposing a phenomenological framework for understanding how social media algorithms manifest themselves in user experience. While the black box paradigm treats algorithms as opaque, self-contained entities that exist only ‘behind the scenes’, this article argues that algorithms are better understood as genetic phenomena that unfold temporally through user-platform interactions. Recent scholarship in critical algorithm studies has already identified various ways in which algorithms manifest in user experience: through affective responses, algorithmic self-reflexivity, disruptions of normal experience, points of contention, and folk theories. Yet, while these studies gesture toward a phenomenological understanding of algorithms, they do so without explicitly drawing on phenomenological theory. This article demonstrates how phenomenology, particularly a Husserlian genetic approach, can further conceptualize these already-documented algorithmic encounters. Moving beyond both the paradigm of artifacts and static phenomenological approaches, the analysis shows how algorithms emerge as inherently relational processes that co-constitute user experience over time. By reconceptualizing algorithms as genetic phenomena rather than black boxes, this paper provides a theoretical framework for understanding how algorithmic awareness develops from pre-reflective affective encounters to explicit folk theories, while remaining inextricably linked to users’ self-understanding. This phenomenological framework contributes to a more nuanced understanding of algorithmic mediation in contemporary social media environments and opens new pathways for investigating digital technologies.
Abstract: My comments begin by noting the primary, foundational contributions to computer ethics (CE) in its first decade or so made in Jim Moor’s 1985 paper “What Is Computer Ethics?” I then turn to his still earlier paper, “Are There Decisions Computers Should Never Make?” (1979). As with his 1985 paper, Moor deftly identified in 1979 several central elements that continue to define contemporary discussions of especially Artificial Intelligence (AI) and Machine Learning (ML) systems, discussions I approach primarily in terms of phronēsis as a form of self-correcting ethical judgment. Thirdly, I note his equally pivotal contributions to theories of privacy as these have unfolded over the past 20 years or so. While by no means a complete summary of Moor’s contributions to these fields, these comments aim to foreground some of his most central and definitive ones from my perspective as a scholar and researcher in these domains.
Abstract: The advent of quantum computing will compromise current asymmetric cryptography. Awaiting this moment, global superpowers are routinely collecting and storing encrypted data, so as to later decrypt it once sufficiently strong quantum computers are in place. We argue that this situation gives rise to a new mode of global surveillance that we refer to as a quantum panopticon. Unlike traditional forms of panoptic surveillance, the quantum panopticon introduces a temporal axis, whereby data subjects’ future pasts can be monitored from an unknown “superposition” in the quantum future. It also introduces a new level of uncertainty, in that the future watchman’s very existence becomes a function of data subjects’ efforts to protect themselves from being monitored in the present. Encryption may work as a momentary protection, but it increases the likelihood of long-term preservation for future decryption, because encrypted data is stored longer than plaintext data. To illustrate the political and ethical aspects of these features, we draw on cryptographic as well as theoretical surveillance literature and call for urgent consideration of the wider implications of quantum computing for the global surveillance landscape.
Abstract: Around the turn of this century a number of emerging technologies were in the news, raising some potentially significant ethical questions. Given that they were emerging, they as yet had no, or very few, impacts, so it was not obvious how best to assess them ethically. Jim Moor addressed this issue and offered three suggestions for a better ethics for emerging technologies. His first was that ethics should be dynamic, that is, it should be an ongoing process before, during and after the technological development. Second, there should be close collaboration between the researchers and developers on the one hand, and ethicists and social scientists on the other. Finally, ethical analyses should be more sophisticated. In this paper I argue that environmental issues and the questioning of core ethical values should be a central part of the ethics of emerging technologies, using AI examples. Given the kind of beings that we are, technology and the environment are closely connected for human flourishing.
Abstract: ‘Are There Decisions Computers Should Never Make?’ is one of James H. Moor’s many groundbreaking papers in computer ethics, and it is one that I have thought a good deal about since its publication in 1979, and especially in recent years in relation to current discourse on AI. In this paper, I describe Jim’s analysis, reflect on its relevance to current thinking about AI, and take issue with several of his arguments. The conclusion of Jim’s paper is that computers should never choose human values and goals. I suggest that this is not possible because of the nature of values and how they are intertwined in computer decision making.
Abstract: This paper is intended as a tribute to the late James Moor. An esteemed Dartmouth professor who published in many areas of philosophy, including logic, Moor is perhaps best remembered today for his pioneering work in the field of computer ethics. His seminal (and award-winning) article, “What Is Computer Ethics?” (Metaphilosophy, 1985), was highly influential both in defining and shaping the then nascent field of computer ethics. Many other computer-ethics-related papers followed over the next quarter century, in which Moor examined a range of topics – from moral responsibility to autonomy to privacy – in the context of computing and emerging technologies, including nanotechnology and AI. And while the insights and frameworks put forth in many of his published works have received the acclaim they deserve, Moor’s contribution to the privacy literature remains, in my view, underappreciated. In trying to show why his privacy theory deserves much more attention than it has received to date, I also briefly describe the evolution of Moor’s position on privacy – from his earlier publications on that topic to a comprehensive and systematic privacy framework. I then suggest that a further exploration of his privacy theory would benefit researchers working in technology ethics in general, and AI ethics in particular. Finally, I encourage privacy scholars to take a closer look at Moor’s privacy framework to see whether they might be able to tease out and disclose some potential insights and features that may still be embedded in that robust theory of privacy.
Abstract: This article traces the historical development of the ethics of emerging technologies. It argues that during the late 2000s and 2010s, the field of ethics of technology transformed from a fragmented, reactive, and methodologically underdeveloped discipline focused on mature technologies and lacking policy orientation into a more cohesive, proactive, methodologically sophisticated, and policy-focused field with a strong emphasis on emerging technologies. An agenda for this transition was set in Jim Moor’s seminal publication “Why We Need Better Ethics for Emerging Technologies”.
Christopher Starke, Tobias Blanke, Natali Helberger, Sonja Smets...
pp. 22.1-22.5
Abstract: As AI becomes increasingly ingrained in societies worldwide, it shapes access to resources, opportunities, and social outcomes. Numerous examples have shown that it can also reinforce and perpetuate existing inequalities. Yet, AI also has the potential to enhance fairness. This special issue goes beyond simply analyzing these fairness-related challenges. Instead, it aims to provide deeper insights into solutions for mitigating bias and explores how AI can be harnessed to create fairer societies. Achieving this requires collaboration across academic disciplines, as AI fairness is not just a technical issue but one that is deeply connected to societal values and ethical considerations. The contributions in this issue highlight the importance of an interdisciplinary approach, demonstrating that fairness in AI cannot be understood in isolation from broader social, cultural, and historical contexts. It is not enough to evaluate AI systems based on present-day standards; we must also consider how societal values evolve over time. By doing so, we can ensure that AI systems are designed not only to address existing biases but also to anticipate and adapt to the changing needs of society, ultimately helping to build a more just and inclusive future.