Special Forum: AI through the ΑΩ: Theological Librarians Interact with Artificial Intelligence

Artificial Intelligence, Language, and Humanization in the Academic Library

Computational linguist and cognitive scientist Emily Bender (2024) has articulated six ways in which both the development and the marketing of technologies dubbed “artificial intelligence” (AI) produce dehumanization: “the computational metaphor,” “digital physiognomy,” “ground lies,” “irrelationality,” “ghost work,” and “reinforcement of the white racial frame.” This essay focuses on the first of these mechanisms—the computational metaphor—and reflects on its implications for academic library practice, particularly concerning information literacy. I also offer initial ideas for resisting dehumanization in academic libraries by using the lens of virtue information literacy (VIL) developed by Bivens-Tatum (2022).

Bender’s critique of the computational metaphor draws on Baria and Cross (2021), who express it as a bidirectional conceptual metaphor in the sense of Lakoff and Johnson (1980): THE BRAIN IS A COMPUTER, and THE COMPUTER IS A BRAIN (the capitals here stand in for the small caps traditionally used in conceptual metaphor theory). As is characteristic of conceptual metaphors, the computational metaphor surfaces in language, including in the term “artificial intelligence” itself (Baria and Cross 2021, 2), since this term suggests that computational models may be endowed with intellectual capabilities qualitatively comparable to human intelligence.

Baria and Cross (2021) contend that the computational metaphor, which (in the form of THE BRAIN IS A COMPUTER) has been a persistent and useful idea in the field of neuroscience for several decades, also tends to “afford the human mind less complexity than is owed, and the computer more wisdom than is due” (2). This effect has become particularly potent as the metaphor (in the form of THE COMPUTER IS A BRAIN) has been popularized among nonexperts through the rhetoric of the tech industry (10) in a way that encourages an inappropriately high level of trust in technologies labeled as “AI” (6). In light of this potential to mislead, Baria and Cross advocate for a “new lexicon” of AI-labeled technologies that problematizes the anthropomorphizing thrust of some current discourse (8).

I am not aware of any previous work that explicitly examines the computational metaphor in relation to libraries or information literacy. Wilkinson (2023), however, has written about the presence of a different conceptual metaphor in information literacy, namely, SCHOLARSHIP AS CONVERSATION. In an argument that parallels Baria and Cross’s stance on the computational metaphor, Wilkinson argues that SCHOLARSHIP AS CONVERSATION, which is enshrined in the Association of College and Research Libraries’ (ACRL) Framework for Information Literacy for Higher Education, is sometimes useful but may mislead novices, who may be inclined to take the metaphor too literally. Since previous research suggests that “the choice of even a single word can have measurable influence on how people approach information gathering and problem solving,” Wilkinson proposes that librarians complement the conversation metaphor with language that models scholarship as a collaborative effort toward a goal, namely, greater understanding of some subject of inquiry (473). In other words, Wilkinson advocates a new (or modified) lexicon of information literacy, which would presumably be borne out in the language librarians use when communicating with students and other patrons. I suggest that librarians, bearing in mind the ramifications of the computational metaphor and its popularization by the tech industry, can likewise take it upon themselves “to evaluate the language [we] use to describe brains, technology, and society” (Baria and Cross 2021, 9).

By problematizing anthropomorphizing language around AI-labeled technology, librarians can resist the dehumanizing tendencies that Bender (2024) identifies and avoid contributing to excessive hype around these tools. This hype, Bender and Koller (2020) argue, is fueled partly by publications that use words such as “understand” and “comprehend” to describe—misleadingly, from their point of view—the mechanisms behind large language models (LLMs). Mirza and Seale (2017) address the issue of technology hype as it surfaces in libraries, particularly in the form of a technology solutionism that stakes the continuing relevance of libraries on the swift adaptation to and adoption of technological trends.

One problem with the push to elevate the value of libraries by reorienting them around the technocratic ideologies of Silicon Valley is that those ideologies derive much of their prestige from their association with white masculinity. Much discourse around the future of libraries centers on library administrators and information technologists, sidelining the work of other librarians and library workers, who are more likely to be white women or people of color. In a plenary talk that expanded on some of these ideas, Seale (2024) suggested that library instructors, so far as it depends on them, might consider declining to participate in the “technology hype cycle” by saying no to teaching AI. In situations where an unqualified “no” is not possible or desirable, I suggest that library instructors can still neutralize some of the hype by using language that avoids or complicates the computational metaphor.

I now turn to virtue information literacy (VIL), an approach developed by Bivens-Tatum (2022). In VIL, becoming an information-literate person is associated with developing various intellectual virtues, which leads, Bivens-Tatum asserts, to human flourishing (I would emphasize that flourishing can be collective as well as individual). Several of the intellectual virtues Bivens-Tatum discusses can inform a wise response to AI-labeled technologies and their dehumanizing effects, particularly information vigilance and epistemic justice. Bivens-Tatum defines information vigilance as “mindfulness of information, meaning that one attends critically and thoughtfully to all the information one consumes, keeping watch for bad or harmful information” (5). Since the amount of available information far exceeds any individual’s capacity to absorb it, information vigilance implies careful decision-making about which information sources to spend time on and which to set aside. Information vigilance can thus lead students to consider how much time they want to spend engaging with information generated by LLMs and other computational models. In an interview for the language science podcast Because Language, Bender (2023) commented on her policy of declining to engage with the outputs of LLMs: “I don’t waste my time with synthetic text because I have lots and lots of text from real people that I need to read in my life…people whose opinions I care about. Thus, why would I bother with synthetic text extruded from GPT-whatever?” While I do not claim that all information-vigilant people will necessarily arrive at the same decision as Bender, I do think her reasoning here exemplifies information vigilance.

The virtue of epistemic justice, as introduced by Fricker (2007), acts as a counterweight to various forms of epistemic injustice, chief among them testimonial injustice, which occurs when a listener, because of prejudice, accords a disproportionately low level of credibility to a speaker’s assertions. In cultivating a more robust “testimonial sensibility,” a person learns to notice the effects of testimonial injustice and to correct for them. Xu (2024) presents two case studies involving epistemic injustice in the information literacy classroom. AI-labeled technologies are implicated in epistemic injustice in several ways. For example, Baria and Cross (2021) observe that since dominant ideologies equate intelligence with rationality, the mathematical reasoning of AI-labeled technologies is widely seen as more trustworthy than human reasoning, particularly the reasoning of people who may be viewed as less rational because of some aspect of their identities (6). The result is an environment in which a person’s testimony may be devalued relative to that of a computer, and this devaluation is more likely if the person belongs to a minoritized group. A learner acquiring the virtue of epistemic justice would, at a minimum, be concerned about this situation.

In the context of a conversation about dehumanization, what I most appreciate about VIL is that it views the learner as a whole person who brings their whole self to the learning environment. They must bring their whole self since, in VIL, information literacy is “a way of life” (Bivens-Tatum 2022, 208). I acknowledge that any attempts to implement VIL must be tempered with humility and, significantly, critical pedagogy; otherwise, library instructors risk ousting tech solutionism only to bring back the equally racialized, virtue-inculcating archetype of Lady Bountiful (Schlesselman-Tarango 2016; Mirza and Seale 2017). Still, when combined with learner-centered pedagogies and pedagogies of care, I posit that VIL has the potential to humanize both students and instructors. I suspect that would be a salutary outcome even for those who are not convinced of a direct or necessary link between AI-labeled technologies and dehumanization.

References

Baria, Alexis T., and Keith Cross. 2021. “The Brain Is a Computer Is a Brain: Neuroscience’s Internal Debate about the Social Significance of the Computational Metaphor.” Preprint, submitted July 18, 2021. https://doi.org/10.48550/arXiv.2107.14042.

Bender, Emily M. 2023. “79: A.I. Hype Hosedown (with Emily Bender and Jack Hessel).” Because Language (podcast), hosted by Daniel Midgley, July 26, 2023. https://becauselanguage.com/79-a-i-hype-hosedown/.

Bender, Emily M. 2024. “Resisting Dehumanization in the Age of ‘AI.’” Current Directions in Psychological Science 33, no. 2: 114–20. https://doi.org/10.1177/09637214231217286.

Bender, Emily M., and Alexander Koller. 2020. “Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data.” In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 5185–98. https://doi.org/10.18653/v1/2020.acl-main.463.

Bivens-Tatum, Wayne. 2022. Virtue Information Literacy: Flourishing in an Age of Information Anarchy. Sacramento, CA: Library Juice Press.

Fricker, Miranda. 2007. Epistemic Injustice: Power and the Ethics of Knowing. Oxford: Oxford University Press.

Lakoff, George, and Mark Johnson. 1980. Metaphors We Live By. Chicago: University of Chicago Press.

Mirza, Rafia, and Maura Seale. 2017. “Who Killed the World? White Masculinity and the Technocratic Library of the Future.” In Topographies of Whiteness: Mapping Whiteness in Library and Information Science, edited by Gina Schlesselman-Tarango, 171–97. Sacramento, CA: Library Juice Press. https://mauraseale.org/wp-content/uploads/2016/03/Mirza-Seale-Technocratic-Library.pdf.

Schlesselman-Tarango, Gina. 2016. “The Legacy of Lady Bountiful: White Women in the Library.” Library Trends 64, no. 4: 667–86. https://doi.org/10.1353/lib.2016.0015.

Seale, Maura. 2024. “Critical Library Instruction and the Question of Labor.” Plenary talk at the LOEX 52nd National Conference, Naperville, IL, May 2024.

Wilkinson, Lane. 2023. “Conceptual Metaphors in Information Literacy: Reframing the Scholarly Conversation as Scholarly Collaboration.” Library Quarterly 93, no. 4: 455–78. https://doi.org/10.1086/726319.

Xu, Lijuan. 2024. “Information Literacy through the Lens of Epistemic Justice: Centering the Missing and Unheard Voices of Marginalized Groups.” New Review of Academic Librarianship. Published online, January 22, 2024. https://doi.org/10.1080/13614533.2024.2306366.