Journal information

AI & society
Springer-Verlag
Quarterly
ISSN: 0951-5666
Indexed in ESCI
Officially published

    Beyond symbol processing: the embodied limits of LLMs and the gap between AI and human cognition

    Rasmus Gahrn-Andersen
    pp. 3105-3107
    Abstract: Luciano Floridi has argued that AI, particularly Large Language Models (LLMs) like ChatGPT and Bard, exhibit "agency without intelligence." He points to how LLMs manage texts statistically, without understanding their content. In an example, Floridi reflects on his experience prompting a previous version of ChatGPT with a question: "What is the name of Laura's mother's only daughter?" Confused by the Saxon genitive, the model gave an "idiotic" response, demonstrating its limited understanding. He notes, "LLMs keep learning most 'errors,' which are like zero-day exploits," showing how even apparent shortcomings can be temporary (Floridi 2023, p. 15).

    On the individuation of complex computational models: Gilbert Simondon and the technicity of AI

    Susana Aires
    pp. 3109-3122
    Abstract: The proliferation of AI systems across all domains of life, as well as the complexification and opacity of algorithmic techniques, epitomised by the burgeoning field of Deep Learning (DL), call for new methods in the Humanities for reflecting on the techno-human relation in a way that places the technical operation at its core. Grounded in the work of the philosopher of technology Gilbert Simondon, this paper puts forward individuation theory as a valuable approach to reflecting on contemporary information technologies, offering an analysis of the functioning of deep neural networks (DNNs), a type of data-driven computational model at the core of major breakthroughs in AI. The purpose of this article is threefold: (1) to demonstrate how a joint reading of Simondon's mechanology and individuation theory, foregrounded in the Simondonian concept of information, can cast new light on contemporary algorithmic techniques by considering their situated emergence as opposed to technical lineage; (2) to suspend a predictive framing of AI systems, particularly DL techniques, so as to probe into their technical operation, accounting for the data-driven individuation of these models and the integration of potentials as functionality; and finally, (3) to argue that individuation theory might in fact de-individuate AI, in the sense of disassembling the already-there, the constituted, paving the way for questioning the potentialities for data and their algorithmic relationality to articulate the unfolding of everyday life.

    Toward an empathy-based trust in human-otheroid relations

    Abootaleb Safdari
    pp. 3123-3138
    Abstract: The primary aim of this paper is twofold: first, to argue that we can enter into relations of trust with robots and AI systems (automata); and second, to provide a comprehensive description of the underlying mechanisms responsible for this relation of trust. To achieve these objectives, the paper first undertakes a critical examination of the main arguments opposing the concept of a trust-based relation with automata. Showing that these arguments face significant challenges that render them untenable, it thereby prepares the ground for the subsequent positive analysis, proposing a framework in which these challenges can be addressed. According to this framework, trust does not originate from mere reliability, but rather from an empathic relation with automata. This initial empathic relation elevates the automata to the status of what I will term "Otheroids." The paper then explores how this human-Otheroid relationship inherently possesses the seeds for the development of trust. Finally, it examines how these seeds can grow into a basic form of trust with Otheroids through the establishment of a rich history of interaction.

    ChatGPT as imperfect rhetorical tool in public policy

    Marcel Becker
    pp. 3139-3148
    Abstract: The introduction of Large Language Models that generate texts, such as GPT-4 and Bard, has met with a lot of enthusiasm. A typical media post at the end of 2022, when ChatGPT was introduced, read like: "I asked ChatGPT to write me a text about ... It did an amazing job, certainly way better than I would have done". It is of course interesting to notice that apparently people take themselves as frames of reference, but the really striking thing is that LLMs appear to gnaw at the very foundation of any domain of work. The systems are used in a wide variety of domains: they solve puzzles (Noever and Burdick 2021), explain why a newly composed joke is funny (Agüera y Arcas 2024), and contribute to scientific debates (Dwivedi 2023). Scientists, journalists, managers, teachers, and all other kinds of professionals are fascinated by the capacities of the system.

    Artificing intelligence: from isolating IQ to amoral AI

    Colin Koopman
    pp. 3149-3161
    Abstract: Our contemporary moment is saturated by investments in artificial intelligence (AI). AI is not without its critics, many of whom hope to show why machines simply cannot be intelligent. Yet AI's claim to intelligence is not dubious. Rather, what requires examination is the assumption that independent intelligence can help resolve our ethical-political problems instead of making them worse. Consider that AI exhibits a pair of tendencies commonly believed to be contradictory: success in passing validated behavioral tests of intelligence and manifesting ethical failures in the form of discriminatory and biased data analyses. The history of early-twentieth-century psychometric sciences helps us see that these tendencies are far from contradictory. For that history shows that psychometricians designed tests in a way that relied upon the separation of intelligence from the measure of moral traits. This paper tracks the emergence of technologies and sciences of intelligence through the work of Lewis Terman and others as they disseminated their testing techniques in the domain of education in the 1920s. The wide deployment of intelligence tests in subsequent decades created the historical conditions for the viability of the inaugural work of Alan Turing on machine intelligence in the 1950s and beyond. The result is today's amoral AI.

    Competing narratives in AI ethics: a defense of sociotechnical pragmatism

    David S. Watson, Jakob Mökander, Luciano Floridi
    pp. 3163-3185
    Abstract: Several competing narratives drive the contemporary AI ethics discourse. At the two extremes are sociotechnical dogmatism, which holds that society is full of inefficiencies and imperfections that can only be solved by better technology; and sociotechnical skepticism, which highlights the unacceptable risks AI systems pose. While both narratives have their merits, they are ultimately reductive and limiting. As a constructive synthesis, we introduce and defend sociotechnical pragmatism, a narrative that emphasizes the central role of context and human agency in designing and evaluating emerging technologies. In doing so, we offer two novel contributions. First, we demonstrate how ethical and epistemological considerations are intertwined in the AI ethics discourse by tracing the dialectical interplay between dogmatic and skeptical narratives across disciplines. Second, we show through examples how sociotechnical pragmatism does more to promote fair and transparent AI than dogmatic or skeptical alternatives. By spelling out the assumptions that underpin sociotechnical pragmatism, we articulate a robust stance for policymakers and scholars who seek to enable societies to reap the benefits of AI while managing the associated risks through feasible, effective, and proportionate governance.

    Aiding narrative generation in collaborative data utilization by humans and AI agents

    Kaira Sekiguchi, Yukio Ohsawa
    pp. 3187-3208
    Abstract: Narrative generation is growing in importance for data utilization, particularly in the context of co-creation with artificial intelligence (AI) agents. Narratives can, for example, bridge theoretical objects with social understanding and promote human action. Furthermore, clarifying the narrative generation mechanism is essential for constructing effective relationships between humans and AI agents. However, the narrative generation mechanism in data utilization processes has not been fully elucidated. In this study, we developed a framework called the hierarchical narrative representation (HieNaR) to systematize the structure of narrative generation in data utilization processes. HieNaR comprises twelve levels, ranging from the set of texts down to the particle level (e.g., text, sentence, word, character, and stroke), allowing for a comprehensive analysis of narrative structures. We evaluated the usefulness of HieNaR through case studies, examining both individual user experiences and collaborative work between humans and an AI agent. The results demonstrated that the data utilization process interprets data by inquiring whether it satisfies higher-level expectations. In collaboration, AI agents can be understood as co-creative partners in data utilization, possessing their own worldviews. Through these findings, this study not only elucidates the mechanism of narrative generation in data utilization processes but also provides a foundation for improving human-AI collaboration.

    The democratic ethics of artificially intelligent polling

    Roberto Cerina, Élise Rouméas
    pp. 3209-3223
    Abstract: This paper examines the democratic ethics of artificially intelligent polls. Driven by machine learning, AI electoral polls have the potential to generate predictions with an unprecedented level of granularity. We argue that their predictive power is potentially desirable for electoral democracy. We do so by critically engaging with four objections: (1) the privacy objection, which focuses on the potential harm of the collection, storage, and publication of granular data about voting preferences; (2) the autonomy objection, which argues that polls are an obstacle to independently formed judgments; (3) the tactical voting objection, which argues that voting strategically on the basis of polls is troublesome; and finally (4) the manipulation objection, according to which malicious actors could systematically bias predictions to alter voting behaviours.

    The role of generative AI in academic and scientific authorship: an autopoietic perspective

    Steven Watson, Erik Brezovec, Jonathan Romic
    pp. 3225-3235
    Abstract: The integration of generative artificial intelligence (AI), particularly large language models like ChatGPT, presents new challenges as well as possibilities for scientific authorship. This paper draws on social systems theory to offer a nuanced understanding of the interplay between technology, individuals, society, and scholarly authorial practices. This contrasts with orthodoxy, where individuals and technology are treated as essentialized entities. This approach offers a critique of the binary positions of sociotechnological determinism and accelerationist instrumentality while still acknowledging that generative AI presents profound challenges to existing practices and meaning-making in scientific scholarship. This holistic treatment of authorship, integrity, and technology involves comprehending the historical and evolutionary entanglement of scientific individuality, scientific practices, and meaning-making with technological innovation. This addresses current needs for more robust theoretical approaches to the challenges confronted by academics, institutions, peer review, and publication processes. Our analysis aims to contribute to a more sophisticated discourse on the ethical and practical implications of AI in scientific research.

    Redefining intelligence: collaborative tinkering of healthcare professionals and algorithms as hybrid entity in public healthcare decision-making

    Roanne van Voorst
    pp. 3237-3248
    Abstract: This paper analyzes the collaboration between healthcare professionals and algorithms in making decisions within the realm of public healthcare. By extending the concept of 'tinkering' from previous research conducted by philosopher Mol (Care in Practice: On Tinkering in Clinics, Homes and Farms, transcript Verlag, Amsterdam, 2010) and anthropologist Pols (Health Care Anal 18: 374-388, 2009), who highlighted the improvisational and adaptive practices of healthcare professionals, this paper reveals that in the context of digitalizing healthcare, both professionals and algorithms engage in what I call 'collaborative tinkering' as they navigate the intricate and unpredictable nature of healthcare situations together. The paper draws upon an idea that is increasingly common in the academic literature, namely that healthcare professionals and the algorithms they use can form a hybrid decision-making entity, challenging the conventional notion of agency and intelligence as being exclusively confined to individual humans or machines. Drawing upon an international, ethnographic study conducted in different hospitals around the world, the paper describes empirically how humans and algorithms come to decisions together, making explicit how, in the practice of daily work, agency and intelligence are distributed among a range of actors, including humans, technologies, knowledge resources, and the spaces where they interact. The concept of collaborative tinkering helps to make explicit how both healthcare professionals and algorithms engage in adaptive improvisation. This exploration not only enriches the understanding of collaborative dynamics between humans and AI but also problematizes the individualistic conception of AI that still exists in regulatory frameworks. By introducing empirical specificity through ethnographic insights and employing an anthropological perspective, the paper calls for a critical reassessment of current ethical and policy frameworks governing human-AI collaboration in healthcare, thereby illuminating direct implications for the future of AI ethics in medical practice.