AI Authorship and the Role of the Librarian
Abstract: Debates about generative artificial intelligence (AI) authorship have intensified within higher education and among academic publishers, with opinions divided between responsible collaboration and outright prohibition of AI use. Within the field of library and information science, these debates are especially complex, owing both to the field’s highly interdisciplinary nature, which brings together contributors from the humanities and the social and natural sciences, and to its dual commitments to professional practice and technical innovation. This essay examines the tensions surrounding AI-assisted writing and argues for the need to move beyond the simplistic accept-prohibit dichotomy. Drawing on the previously introduced concept of bothorship, the paper proposes a model of responsible AI authorship that preserves human intellectual ownership while acknowledging the practical and equity-related benefits of AI-supported writing. Attention is given to the role of libraries and librarians—including theological libraries—in advancing AI literacy, supporting ethical writing practices, and mitigating structural inequities in scholarly communication.
Library and information science truly must be one of the most peculiar academic disciplines. It straddles the line between the humanities- and service-oriented areas of librarianship and the technical side of information science. There is certainly a lot of blending and overlap between these two sides, but there are also some points of substantial distinction, not only between the two areas themselves but also among the students who are interested in them. One of the starkest differences is in attitudes toward artificial intelligence (AI) writing. My library science students at the University of North Texas almost universally avoid any AI use. They seem to be reluctant even to explore the technology and hyperaware of potential risks and abuse. On the other hand, my information science students embrace the technology, and many engage in AI writing practices to the point of overreliance.
I bring up this distinction between the library and the information science sides of our discipline to highlight the fact that views toward AI authorship—its permissibility, fairness, and accuracy in performing writing tasks—vary widely even within a single university classroom. In the above situation, I would argue that there are no entirely “right” or “wrong” parties. Certainly, overreliance on and abuse of large language models (LLMs) in writing is bad, but is avoiding all use of AI that much better? Consider the disadvantage that a future librarian might face in the world five years from now, when LLMs are the most prominent information sources that the public uses for any information need. If that librarian has no experience working with LLMs, they will be ineffective in their job. AI use is unavoidable. We must learn to use it, and to use it responsibly—a fine line that we must learn to walk in our work.
In this essay, I would like to discuss some of the nuances I perceive when it comes to AI authorship—what I termed in one paper “bothorship” (bot authorship) (Lund 2025). I hope to elucidate why I believe it is better to teach students to write with AI than to avoid it. I will also endeavor to set a course for advocating responsible AI authorship among library professionals.
What Does Responsible AI Authorship Look Like?
I believe that ethical AI authorship starts with authors drafting what they want to communicate. This initial step ensures that the ideas in the writing are their own. Every author should be responsible for the thoughts they convey, the references they use, and the structure of their work. Large language models, however, can help improve grammatical clarity and arrangement of content and can suggest edits and additions for the author to evaluate. If this is done properly, disclosure of AI use should not be necessary. After all, we are not asked to disclose the use of spell checkers and other features that are built directly into Microsoft Word. The problem comes when authors rely on these models to develop ideas for them, an act that raises concerns about misinformation and hallucinations (Garry et al. 2024).
This approach to AI authorship can serve as an equalizer for disadvantaged populations in our landscape of scholarship, such as non-native English speakers (Mannuru et al. 2024). The vast majority of top scholarly publications worldwide are published in the English language. It is the lingua franca of research, yet less than 5% of the world’s population are native English speakers. To the extent that generative AI tools can assist with translation and refining writing clarity in order to make promising manuscripts more competitive for acceptance in top journals, AI-assisted authorship can help reverse biases in the creation of knowledge that have existed since the inception of the written word.
Some important caveats provide boundaries for this approach to AI authorship:
- Instructions should be read and then prompts developed in the author’s own words. If I give an assignment to my class, students should not just copy and paste my instructions into ChatGPT. They need to apply their own critical thinking skills to determine exactly how to prompt the model to get what they want out of their project. Similarly, researchers should not copy and paste a list of ideas or topics provided in a journal call to develop the idea for their work. The ability to generate novel ideas is uniquely human, and we should endeavor to keep it that way.
- Authors should draft their own content. LLMs can be great for improving writing clarity and suggesting avenues for expanding work, but they should not be doing the work of drafting a paper from scratch. When authors draft their own content, even a “rough draft” or outline, providing this foundational information helps improve the accuracy of the model’s output and gives the LLM a picture of the author’s writing style. This helps reduce errors and avoid overly generic AI outputs with a bland, 19th-century-esque writing style.
- Authors are responsible for all content in their paper. If an LLM produces misinformation that is not corrected by the author, then the inaccuracy is as much the author’s “fault” as if the author themselves had created it. If an author is thoroughly reviewing what a model produces and has expertise in an area that they claim, they should be able to identify dubious information and double-check and correct false claims. If this is not done, then it suggests that the author either 1) did not review the output provided by the model or 2) does not have the necessary expertise and thus should not be writing in that area.
- No references should be generated with AI. If an LLM is likely to recommend adding references within the material it produces, it should instead be prompted to “mark where citations should be added by including a placeholder (CITATION).” The author should be responsible for identifying relevant works from library databases or elsewhere in the appropriate literature. Large language models do not “source” information from a single, identifiable origin in the way that humans do, which leads to hallucinated references and misinformation. When human authors take responsibility for identifying relevant resources, they ensure that the information in the paper is accurate (i.e., that the LLM has not hallucinated the claims themselves).
If these guidelines are followed carefully, there should be no way for me, as the reader of the paper, to definitively know that AI has been used in the writing. As an evaluator, my objective then concerns whether the content of the paper is accurate and meaningful, rather than whether AI has been used (which, unfortunately, has become a concern in peer review: in the past year, I have received multiple submissions containing fictitious, hallucinated references).
Why Libraries and Librarians Have a Role in Responsible AI Authorship
Prompting LLMs effectively and working with AI ethically requires both expertise and nuance, which is why librarians need to be involved in guidance regarding their use. As these models evolve and improve in their abilities to understand human commands, the need for “prompt engineering” may diminish somewhat, but there will always be some benefit to being able to articulate a need as clearly and accurately as possible. Librarians can serve as AI boundary-setters, turning abstract policy about AI use (such as university academic integrity policies) into clear, practical guidance on when student and faculty writers may use AI in their work. To do so, the library may work directly with the departments that own the academic integrity and AI policies to translate the policy into guidelines and then integrate discussion of these guidelines into existing instructional sessions.
Librarians are experts in information literacy. In an era of AI authorship, they must also be experts in AI literacy, educating patrons about the limitations of these models (LaFlamme 2025). LLMs do not “understand” information, though they convey it to people as though they do. When seeking information, authors should include LLMs as only one source of many consulted. When writing with AI, everything that a model generates must be scrutinized. It is easy to fall into a trap whereby all aspects of a writing task—from topic development to research to drafting the paper—are performed entirely within the ecosystem of a single large language model. This approach introduces a high probability of bias and inaccuracy. AI literacy instruction is a guard rail against this and other misuses of AI models.
Just as they help patrons refine search results during research consultations, librarians should also be prepared to instruct patrons on how to optimize the AI outputs they receive. Clear and detailed instructions for the model are important, as are revisions and clarifications to prompts until the model’s outputs satisfactorily serve their intended purpose. The approach to searching (natural language) is distinct from the keyword searching of the past, but the aim of optimizing outputs from an information system is not. Librarians can leverage their existing skillset while adapting to changes in how patrons interact with information systems, without having to learn an entirely new domain.
AI Authorship and Theological Librarianship
Theological libraries serve specialized and important roles in their communities, supporting theological research and knowledge creation. AI authorship could certainly have a significant role in this environment, perhaps even larger than in the average field. Many religious texts—such as Bibles and other historical documents—fall outside of copyright and were likely included in the training dataset for many LLMs. Models’ content regarding theological topics may therefore be more extensive than their content in other areas, and the citations may be more accurate than those generated for other subject matter.
Depending on the type of theological library, librarians may have an opportunity to educate patrons about AI itself in relation to religious belief (Lund and Teel 2024). Religious belief sometimes encounters tension with the emergence of new technologies, especially those that purport to emulate human intelligence and personality. LLMs’ conversational style encourages anthropomorphism, which can result in the misattribution of beliefs, moral frameworks, and motivations to AI platforms and can lead to hesitation regarding their use. These and other misunderstandings about AI can be clarified by an expert in order to create a more optimistic, but also responsible, environment for AI use.
Patrons of the theological library may find value in interacting with LLMs to identify and compare varying interpretations of religious texts and doctrines. However, when it comes to original theological scholarship, the issues with AI authorship remain fundamentally the same as those identified for other researchers and authors. Society will benefit from having a class of new professionals and scholars who are well-versed in ethical AI use, regardless of their field. Ethical, effective AI use can begin with AI literacy instruction within the university, and theological libraries are no exception.
Conclusion
AI authorship is not a simple matter. One must parse through a layer of ethical ambiguity, where there is no clear threshold between permissible AI use for manuscript enhancement and impermissible use as a writing replacement. Librarians are valuable advocates for responsible authorship practices and also for the use of tools that can help mitigate inequities among scholars. Rather than resisting the inevitable expansion of AI tools in academic writing, the profession must work proactively to guide their ethical use in ways that facilitate scholarly exploration, uphold scholarly integrity, and promote equal access to the scholarly conversation. In promoting a responsible approach to AI-assisted authorship, librarians can both encourage and facilitate the work of academic writers.
References
Garry, Maryanne, Way Ming Chan, Jeffrey Foster, and Linda A. Henkel. 2024. “Large Language Models (LLMs) and the Institutionalization of Misinformation.” Trends in Cognitive Sciences 28 (12): 1078-1088. https://doi.org/10.1016/j.tics.2024.08.007.
LaFlamme, Katherine A. 2025. “Scaffolding AI Literacy: An Instructional Model for Academic Librarianship.” The Journal of Academic Librarianship 51 (3): 103041. https://doi.org/10.1016/j.acalib.2025.103041.
Lund, Brady. 2025. “Bothorship: AI Chatbot Authorship after Two Years.” Library Hi Tech News 42 (2): 6-7. https://doi.org/10.1108/LHTN-06-2024-0098.
Lund, Brady, and Zoë Abbie Teel. 2024. “Fear of AI, Christianity, and the Modern Library.” The Christian Librarian 67 (1): Article 5. https://doi.org/10.55221/2572-7478.2450.
Mannuru, Nishith Reddy, Sakib Shahriar, Zoë A. Teel, et al. 2024. “Artificial Intelligence in Developing Countries: The Impact of Generative Artificial Intelligence (AI) Technologies for Development.” Information Development 41 (3): 1036-1054. https://doi.org/10.1177/02666669231200628.