A Word from the Editor

If you’ve taken a course or a workshop on copyright since 2011, you’ve heard about the monkey selfie. Nature photographer David Slater set up his camera on a trip to Indonesia and let the macaques photograph themselves. The photos were licensed to media outlets, and then some industrious Wikipedian reasoned that since they were taken by a non-human animal, there was no human copyright holder, and therefore the images belonged in the public domain. They were uploaded to Wikimedia Commons, beginning a years-long legal dispute that continues to serve as a fertile thought experiment. What qualifies as authorship? Who (or what) qualifies as an author?

These questions are becoming particularly relevant with the ascendance of generative AI and large language models (LLMs), which are increasingly being used to create content in the scholarly realm. When a scholar sets up her metaphorical camera, prompting an LLM to metaphorically take a photograph, is she still the “author”? AI-generated or AI-assisted scholarship is giving us epistemological and existential pause, and it’s also causing practical issues in librarianship. How do we catalog an AI “author”? How do we think about scholarship as a conversation when some of the interlocutors are machines? TCB’s most recent issue (vol. 34, no. 1) tackles the use of AI in technical services, and TL wanted to explore these concerns as well, specifically when it comes to AI authorship.

The Committee on Publication Ethics and other publishing collectives have made statements asserting that AI cannot be listed as the author of a paper, an issue this very publication grappled with in 2024 when it experimented with publishing an essay written by ClaudeAI. Atla Open Press’s AI Policy states that “AI will not be considered an author for Atla Open Press publications. AI tools may not be listed as an author on any scholarly work published by Atla” (Theological Librarianship n.d.). The question of human accountability seems to be at the center of whether AI can or should be considered an author. If a self-driving car strikes a pedestrian, someone (in this case, the corporate “person” of the manufacturer) must be liable. Similarly, there has to be someone who answers for the work of an LLM, particularly if its output is harmful or incorrect. Christopher Crawford and Thomas Phillips share how the role of scholars “supervising” LLMs can serve as a guardrail in the creation of AI-generated textbooks, making theological education more accessible financially and linguistically across the globe.

We have two peer-reviewed articles: one by Kevin Smith discussing issues of copyright and AI authorship, including extensive analysis of current relevant legal proceedings; and one by Helen Shin, Douglas Fisher, and Clifford Anderson that offers the aspirational concept of the “superscholar”: a hybrid of generative and deliberative AI that can more closely adhere to citation norms and solve the problem of the “responsibility gap” while it inspires deeper human thoughtfulness and questioning.

Jennifer Woodruff Tait’s essay clearly and succinctly explains how LLMs work and why their predictive text output can’t be compared to the human processes of thinking. Greg Rosauer’s essay cautions against engaging with AI in a self-surrogate relation, arguing that offloading our existential presence of authorship to a non-human and yet personified technology leads us away from writing as a “focal practice” that requires attention, skill, and sociality.

One interesting point in Brady Lund’s essay that I had not previously considered is that because so much theological material is in the public domain, there may well be more theology than other subjects in LLM training data. What might this mean for the theological underpinnings of LLM-generated outputs? Shin et al. also consider the theological inputs of LLMs, highlighting that such training material is primarily Western and Anglophone, perpetuating the marginalization of other voices.

As always, this issue also has several critical reviews of recently published books, including the topical Artificial Intelligence for Academic Libraries by Clifford B. Anderson and Douglas H. Fisher and Generative AI and Libraries by Chris Rosser and Michael Hanegan.

Overall, I’m so pleased with the breadth of AI-authorship-related topics considered by our authors and the depth of critical thought they showed in considering them. The Editorial Board hopes that this issue of TL can serve as a starting point for meaningful and generative conversation (pun intended), not only about AI’s use in scholarship and theological libraries, but also about the character and texture of our roles as scholars, librarians, and humans pursuing academic activities.

Thanks for being here, and do let us know what you think.

Keegan Osinski

Editor-in-Chief, Theological Librarianship

References

Theological Librarianship. n.d. “About the Journal.” Accessed April 1, 2026. https://serials.atla.com/theolib/about#ai-policy.