To Err Is Human, to Detect Errors Is Human-in-the-Loop

We have grown accustomed to software helping proof our work—finding spelling mistakes or flagging subject-verb disagreements—but with artificial intelligence (AI) those roles will be reversed, and humans will become the spell checkers tasked with making sure the infinitely more capable AI does not err. It is a bold vision in which humans are transmogrified into squiggly red lines beneath misspelled words. For catalogers, the labor of subject analysis will be ceded to the machine, and we will merely be asked to examine what it has spewed out and either nod with approval or sigh and say, “Another hallucination, dear machine, please try again.” The machine will then generate another attempt, and this one, likely, will get our nodding approval, and the record can move on, having been sanctioned by a Human-in-the-Loop.

Of course, we are not quite there yet. For the time being you and I are still just plain old humans, but our hyphenated future as Humans-in-the-Loop is just around the corner, as all the best cataloging minds will tell you. They are busy conducting experiments with AI as we speak (planet Earth be damned, long live Data Centers [Zewe 2025]), and they are all quite confident that with a bit more training, these machines will be ready to do our jobs. Let us turn now to their dystopian conclusions, which will give us a deeper sense of what they envision for our future.

In a 2024 interview with the Library of Congress’s Abigail Potter and Caroline Saccucci, Saccucci comfortingly assures us that humans are still necessary, at least for a while:

Since high quality catalog records are essential to the Library of Congress and libraries around the world who use our MARC records, the results are showing us that catalogers will need to review ML/AI output prior to publishing, which we expected. The cataloging assistance workflow prototypes enabling cataloger review and feedback showed promise, and this human-in-the-loop (HITL) concept is moving forward for further iteration. (Brador 2024)

I do not want to dwell upon what the potential ecological impact of these “further iteration[s]” might be, but it is worth noting that in all the articles I discuss here, no consideration (let alone compunction) is given to the real-world impacts of LLMs or AI in general (United Nations Environment Programme 2024). The necessity of training job- and planet-destroying technology is taken as a neutral inevitability, something akin to waking up and discovering you have a pillow beneath your head.

What is worth reemphasizing, however, is the refrain we hear repeated throughout these essays, which are all saying more or less the same thing: humans remain necessary in this work because catalog records created by AI still need to be reviewed by a trained cataloger, but said cataloger is only necessary in this limited capacity. This, somehow, is meant to be reassuring to us: we are not (yet) wholly replaceable. In fact, we are, for the time being, still necessary. Not as catalogers, but as reviewers of AI cataloging. The machine will do the work; we will merely look it over. Sounds pretty grim, right?

Richard Brzustowicz’s 2023 article, “From ChatGPT to CatGPT: The Implications of Artificial Intelligence on Library Cataloging,” begins with words I imagine he thinks are comforting, but for anyone worried about imminent job loss, this opening will make your blood run cold:

While ChatGPT has the potential to streamline aspects of the cataloging process, it is not a complete replacement for human catalogers. The records generated by ChatGPT can serve as effective starting points, but they often contain discrepancies when compared to professional catalogers’ records. (Brzustowicz 2023, 4-5)

How reassuring to know that ChatGPT is not a “complete replacement” for us! For now, anyway. These technologies are nascent, yet these authors are already envisioning a world in which they can completely replace us. Saying that they are not yet capable of doing so is cold comfort, and one would have to be disingenuous not to grasp that complete replacement is the end goal here. Perhaps we will look back nostalgically upon the time when we were rendered into Humans-in-the-Loop, for at least then we were not completely replaced.

Brzustowicz is not finished with his vision for our future, however. He goes on to offer all sorts of exhilarating opportunities for Human-in-the-Loop living:

To ensure error-free results, librarians and other information professionals should approach ChatGPT’s application systematically, by monitoring and evaluating the training data used to develop the model’s capabilities and by regularly curating and updating those data. Additionally, periodic inspection and amendment of the generated records may be necessary to avoid inaccuracies and discrepancies arising from biases in the training data. (Brzustowicz 2023, 7)

Not only does he envision a future in which we get to “inspect and amend” the records of ChatGPT, but we also get to “monitor and evaluate” its training data. What a hoot!

Eric H. C. Chow, T. J. Kao, and Xiaoli Li take a more human approach to discussing the impending expendability of catalogers in their 2024 essay, “An Experiment with the Use of ChatGPT for LCSH Subject Assignment on Electronic Theses and Dissertations,” offering these words of reassurance in their abstract:

Nonetheless, human catalogers remain essential for verifying and enhancing the validity, exhaustivity, and specificity of Library of Congress subject headings generated by LLMs. (Chow, Kao, and Li 2024)

Hear that? We are “essential”! And then one reads on: essential “for verifying and enhancing the validity” of what the machine produces. So here we are again, thrust back into the role of trusty Humans-in-the-Loop, existing not to do any intellectual or creative labor ourselves, but merely to verify and enhance the labor of the LLM or AI or ML or any other acronym that roughly translates to a machine that will do your job.

Instead of researching ways to metamorphose catalogers into Humans-in-the-Loop, should we not stop to ask ourselves if proofreading the work of machines is really the future we want? Does any cataloger want to spend their days spot-checking the subject analysis of AI? And as dystopian as it sounds to become AI’s amanuensis, it is made even more horrifying by the ecological damage being done by these machines. It is not a question of some carbon-neutral machine doing the job we have been doing for decades: these are massively destructive energy sucks that threaten not only to take our jobs but also to imperil the planet upon which the Humans-in-the-Loop must continue to live.

Why are we so blithely diving into these experiments that train AI to catalog? Just because we are living through an economic bubble in which AI firms command baseless and preposterously high valuations does not mean that libraries—typically not-for-profit institutions devoted to making access to information free and easy—need to get swept up in the hype. There are no riches in store for us if we adopt this technology. In fact, there is just the opposite: you do not need many Humans-in-the-Loop to keep a library going, so many of us will face the very real threat of job loss. The Library of Congress and other large academic libraries should take more seriously the implications of the AI training they are doing, both for working librarians and the planet on which those librarians live.

While all these researchers are quick to point out that there is a role for humans in the oversight of AI cataloging, they do not take seriously just how dreary and dystopian that “oversight” would be. No cataloger I have ever met wants to become a proofreader for a bot, so assurances that this role will continue to exist (for a time, anyway) are far from inspiring.

There is no need to do this. So why, then, are we doing it? And at what cost?

References

Brador, Isabel. 2024. “Could Artificial Intelligence Help Catalog Thousands of Digital Library Books? An Interview with Abigail Potter and Caroline Saccucci.” The Signal: Digital Happenings at the Library of Congress (blog). https://blogs.loc.gov/thesignal/2024/11/could-artificial-intelligence-help-catalog-thousands-of-digital-library-books-an-interview-with-abigail-potter-and-caroline-saccucci/.

Brzustowicz, Richard. 2023. “From ChatGPT to CatGPT: The Implications of Artificial Intelligence on Library Cataloging.” Information Technology and Libraries 42 (3). https://doi.org/10.5860/ital.v42i3.16295.

Chow, Eric H. C., T. J. Kao, and Xiaoli Li. 2024. “An Experiment with the Use of ChatGPT for LCSH Subject Assignment on Electronic Theses and Dissertations.” Cataloging & Classification Quarterly 62 (5). https://doi.org/10.1080/01639374.2024.2394516.

United Nations Environment Programme. 2024. “AI Has an Environmental Problem. Here’s What the World Can Do About That.” UNEP News & Stories, September 21, 2024. https://www.unep.org/news-and-stories/story/ai-has-environmental-problem-heres-what-world-can-do-about.

Zewe, Adam. 2025. “Explained: Generative AI’s Environmental Impact.” MIT News, Massachusetts Institute of Technology, January 17, 2025. https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117.