Listen and Learn Sessions

AI in the Seminary Classroom

Equipping Faculty to Address the Pedagogical, Moral, and Ethical Aspects of AI Use for Class Assignments

Abstract: Our session originated from our experience in responding to faculty questions about generative artificial intelligence and its use in class assignments. We addressed this issue with a faculty development session and the creation of a LibGuide in which we answered basic questions about AI mechanics, the need for an AI plagiarism policy, ideas for assignments less susceptible to AI use, and ways to introduce AI to students. As we prepared resources for our faculty, we learned that AI use for class assignments presents challenges far beyond the typical concerns about plagiarism detection, including pedagogical, moral, and ethical implications, and became increasingly convinced that AI education is not only advisable but necessary in the seminary classroom. We also learned that our faculty, like university faculty across the country, were hesitant to address this challenging topic. Librarians, who regularly adapt to changing technology, can take the lead and support faculty in navigating the many issues arising from AI use. We identify three areas of collaboration: developing AI policies, teaching about the moral and ethical concerns regarding AI use, and crafting assignments.

Introduction

Having received multiple questions from faculty regarding generative AI1 plagiarism, Sacred Heart Seminary and School of Theology librarians tackled the formidable task of AI education as librarians do, with research and a LibGuide. We hosted a faculty development session in January 2024 that focused on faculty plagiarism concerns, briefly discussing ethics, AI policies, and assignment design. Although our session and LibGuide sparked interest, faculty did not follow up on our suggestions for policies or teaching. Our library staff concluded that, like information literacy one-shots, the one-time session and LibGuide did not change behavior. We determined that a better approach would be to work with individual faculty members and give them more specific ideas for addressing this issue.

Research shows that our faculty’s reaction mirrors that of other university faculty nationwide. Learning to teach with AI is a daunting undertaking, especially without institutional guidance or support, like taking the proverbial first bite of an elephant. A survey of Northeastern University faculty found that most had not used AI in teaching, even though a majority thought AI and digital literacy were important to their students’ success (Szelenyi 2024). A survey of Metropolitan State University of Denver faculty members had similar findings (Jay 2024): seventy-eight percent of faculty in that study said unfamiliarity with AI was the primary reason they did not use it. We found no studies of theology faculty specifically, but given that seminarians do not compete in the marketplace, we can assume that there may be even less concern among seminary faculty about their students being AI literate.

Given the general hesitancy to educate about generative AI and perhaps a heightened reluctance among seminary faculty, librarians have an ideal opportunity to take the lead on this issue and collaborate with faculty on their approach to it. We should be committed to educating ourselves and faculty about generative AI because of its importance in theological discussions. With the phenomenon of generative AI sweeping every element of society and its widespread use in everything from travel planning to personal counseling, AI education is an integral part of seminary formation. Our students will use generative AI, if not for class assignments, then for personal use. More importantly, the congregations they serve will use it. If seminarians are not taught about AI fundamentals and the practical, moral, and ethical implications of its use, they will not be able to lead their congregations through issues arising from its use or contribute to the essential religious and societal conversations surrounding this technology.

Need for AI Education

AI literacy is generally defined as the knowledge and ability to use, understand, and interact with AI technologies. This involves more than prompt engineering; it also includes understanding the technologies behind generative AI, applying AI concepts in different contexts and applications, and considering the ethical implications of AI use (Ng et al. 2021). Effective AI use still requires the development of human abilities to evaluate, analyze, and adapt (Bowen and Watson 2024, 38). Moreover, there are human skills in which AI is deficient, especially interpreting or applying information to new situations and contexts (see the “sippy cup” example in our LibGuide), asking questions, and predicting future results (Bowen and Watson 2024, 38–40). Human creativity and persistence are still needed to elicit useful output from generative AI. Finally, if there is any doubt that generative AI should be a part of a seminary education, just ask AI!2

AI literacy starts with learning to evaluate various AI applications according to the same criteria used for evaluating other information sources. What is the source behind the technology? Why was the technology developed? Who trained it? On what documents was it trained? Is the purpose commercial or educational? In particular, seminary students should be familiar with various forms of so-called religious AI. See, for example, Christian AI, an ad-based application that describes its training documents as “a vast dataset of Christian literature, biblical texts, and religious writings” (Christian AI 2024). Chatbots, like websites, have varying degrees of reliability, and students must be trained to recognize the differences.

Possible Collaboration: Developing AI Policies

While the need for AI education in the seminary extends beyond the use of generative AI for class assignments, specifying how seminarians can use AI in the classroom is still essential. Librarians can support faculty by helping them develop policies that give students the necessary guidance in using generative AI. At a minimum, students must know:

  1. What AI use is permitted: brainstorming, outlines, research, review of an assignment draft, or other use?
  2. What AI technology can be used: chatbots trained on the Internet, chatbots trained on religious texts, Microsoft Editor, Grammarly, or others?
  3. What specific AI use is to be ethically acknowledged, and what is the form of acknowledgment?
  4. How can AI enhance, but not impede or replace, individual learning, research, and writing processes?3

AI policies are a hot topic of conversation in higher education. See, for example, “Syllabi Policies for Generative AI” (Eaton 2023), a collaborative, crowd-sourced Google Doc of over 86 policies from colleges and universities worldwide, although primarily from the United States. Most of these policies are course-specific, a practice Bowen and Watson recommend in Teaching with AI: A Practical Guide to a New Era of Human Learning (2024, 134). The authors further suggest that co-writing the policy with the class creates a perfect time for a robust discussion of AI and why a policy is needed (2024, 132–134). While those in academia respect scholarship and intellectual property, students might not appreciate the importance of knowing where generated information comes from. Such a discussion can reinforce teaching about knowledge creation over time, crediting information sources, and the student’s responsibility as a knowledge creator.4 Given AI’s propensity for generating misinformation and biased statements, students need to assume responsibility for the content they use. Other discussion topics could include how AI interferes with thinking and writing processes, how over-reliance on AI can impede spiritual reflection, how AI may not be trained in religious tradition and teaching, and how AI cannot discern human relationships and personal experiences, all of which are integral aspects of theological education.5

Possible Collaboration: Teaching About Ethical Issues

Among the many ethical issues associated with the (mis)use of artificial intelligence are:

Bias. The data sets used to train AI are crowdsourced and reflect the biases and prejudices of society, magnifying sexism, racism, and religious and gender prejudice. For example, when Microsoft used Twitter to train its chatbot Tay on how and what to tweet, it spewed racist and misogynist tweets (Gaudet 2022). Even when there is an attempt to edit the content, human biases remain embedded.

Environment. The mining of minerals needed to manufacture electronic components, such as lithium and cobalt, creates environmental disasters in the often-developing countries where they are extracted. Data centers require huge amounts of water and electricity for cooling, lighting, and other systems. Carbon emissions, water pollution, and poor resource management in construction are other high-impact consequences.

Privacy. AI systems are trained on vast amounts of internet data, including personal information collected without users’ knowledge or consent, in disregard of privacy rights. Companies build detailed user profiles from this information for targeted ads, political manipulation, or other possibly nefarious purposes, or to gain competitive advantages by influencing user behavior. Moreover, the algorithms are “black boxes,” making it hard to determine how a system makes decisions and raising questions about transparency.

Intellectual Property and Copyright. In December 2023, the New York Times brought a lawsuit against OpenAI and Microsoft for the unauthorized use of its copyrighted stories to train the chatbots that are now its competitors. The idea that chatbots will “generate” or “create” news is a disquieting prospect. There is a similar issue with image generation: to train an AI image generator, developers must use millions or billions of images scraped from the internet without the creators’ consent, often stripped of their original context.

Exploitation of Human Labor. Low-paid workers are often used to categorize and label data, treating them as parts of a machine rather than as individuals. Entry-level jobs are in danger of being eliminated, leading to a cycle of poverty and unemployability. Content moderators, used to “clean up” some of the data used to train these systems, are often continuously exposed to traumatic text and images without adequate support systems. One example from Time magazine discusses workers in Kenya who were paid between $1.32 and $2.00 per hour to scrub disturbing content from data: “Around three dozen workers were split into three teams, one focusing on each subject. Three employees told Time they were expected to read and label between 150 and 250 passages of text per nine-hour shift. Those snippets could range from around 100 words to well over 1,000. All the four employees interviewed by Time described being mentally scarred by the work” (Perrigo 2023).

Power Issues. Most training text is in English and is further biased by the way the data is collected and absorbed into the models. Algorithms are created by developers who bring their own presuppositions, and image collections may not account for variations in facial structure, skin tone, cultural background, and so on. These factors contribute to perpetuating power imbalances in the world.

Misinformation or Errors. We are all familiar with, or have seen examples of, AI “hallucinations,” situations in which the chatbot makes up sources or gives obviously incorrect information, such as recommending at least one small stone a day as part of a healthy diet (Klebanov 2024). A more important threat, however, is the creation of fake news, the impersonation of real individuals or organizations, and the flooding of the internet with false information. This raises the prospect of highly persuasive campaigns by those seeking to influence public opinion. Russian influence in an American election, anyone?

Autonomous Weapons. In one example, programmers training a “smart weapons” system used photos of tanks in the sunshine to teach the system to identify enemy tanks. When shown a photo of a tank on a rainy day, the AI was unable to identify it because it had associated tanks with sunshine. This is but one example of why Pope Francis, in a written address to participants in an AI ethics conference in Hiroshima, has asked people to push for a ban on autonomous weapons, starting “from an effective and concrete commitment to introduce ever greater and proper human control.... No machine should ever choose to take the life of a human being” (McLellan 2024).
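To make this failure mode concrete, here is a minimal sketch in Python with invented toy data; the feature names and numbers are hypothetical illustrations, not drawn from the actual incident. When a confounding feature such as brightness separates the training labels more cleanly than the tank itself, a simple classifier learns sunshine, not tanks:

    # Hypothetical sketch of the "sunny tank" failure mode: a confound
    # (brightness) separates the training labels more cleanly than the
    # real signal, so the model learns the confound instead.
    from sklearn.tree import DecisionTreeClassifier

    # Each sample: [brightness, tank_shape_score] (invented toy features).
    X_train = [
        [0.90, 0.80], [0.85, 0.95], [0.95, 0.70],  # tanks, all photographed in sunshine
        [0.20, 0.10], [0.30, 0.70], [0.25, 0.15],  # no tanks, all cloudy; one tank-shaped
    ]                                              # rock (0.70) makes shape an imperfect cue
    y_train = [1, 1, 1, 0, 0, 0]

    # A depth-1 decision tree picks the single most separating feature,
    # which here is brightness, not tank shape.
    stump = DecisionTreeClassifier(max_depth=1).fit(X_train, y_train)

    # A tank on a rainy day: strong tank features, low brightness.
    print(stump.predict([[0.20, 0.90]]))  # -> [0], "no tank"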

Influence of Faith. Misusing AI chatbots tempts us to outsource our thinking. The generated output of AI is merely a statistical prediction of word patterns, cut off from any capacity to discern truth or make moral decisions.
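What “a statistical prediction of word patterns” means can be seen in a minimal sketch of a toy bigram model, a radically simplified ancestor of today’s chatbots (the training text here is an arbitrary illustration). It continues a passage purely by looking up which word most often followed the current word in its training data; nothing in it can judge whether the result is true:

    # A toy bigram model: "generate" text by always choosing the word
    # that most often followed the current word in the training text.
    from collections import Counter, defaultdict

    training_text = ("in the beginning god created the heavens and the earth "
                     "and the earth was without form and void").split()

    follows = defaultdict(Counter)
    for current, nxt in zip(training_text, training_text[1:]):
        follows[current][nxt] += 1      # count word-pair frequencies

    def predict_next(word):
        """Return the statistically most common successor, nothing more."""
        return follows[word].most_common(1)[0][0]

    word = "in"
    for _ in range(6):                  # chain six predictions together
        print(word, end=" ")
        word = predict_next(word)
    # Prints "in the earth and the earth": fluent-looking word patterns
    # with no capacity to discern truth or make moral decisions.

Real large language models replace these frequency counts with billions of learned parameters, but the underlying principle of next-word prediction is the same.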

Humanity is created in the image of God. According to Sean McGever, there are three models of the image of God that apply to education: the structural model, in which “the rational capacity of the mind and the volitional capacity of the will … bring to light the knowledge and righteousness of God,” and which sets us apart from the rest of creation; the relational model, which mirrors the trinitarian aspect of God and gives us the ability to respond to and relate to God; and the functional model, which relates to the tasks God has given us to accomplish in the world (McGever 2023).

True education is a form of encounter with the other, requiring active participation, embodied presence, and emotional engagement with a text or person. The underlying framework for education has two parts: solidarity and subsidiarity. Solidarity is not just an emotion but a real relationship, arising from interpersonal actions that witness to concrete care for the common good. Subsidiarity is the principle by which the local informs the actions of the global for the common good, handling issues locally before “kicking them upstairs.”

There seem to be two major philosophies of education: personalism and behaviorism. The latter is based on B. F. Skinner’s study of observable behavior. Its basic principle is that behavior is a function of one’s environment and that learning occurs through conditioning. It is a mechanistic approach with little concern for the student’s interior life; education hands out dopamine hits through A’s.

By contrast, personalism is concerned with one’s interior life and virtue. In his work Love and Responsibility, Karol Wojtyla (the future Pope John Paul II) proposed the personalistic norm: “This norm, in its negative aspect, states that the person is the kind of good which does not admit of use and cannot be treated as an object of use and as such the means to an end. In its positive form, the personalistic norm confirms this: the person is a good towards which the only proper and adequate attitude is love” (John Paul II 1993, 41). Personalism also implies inter-personalism, as Benedict XVI stresses in Caritas in Veritate: “As a spiritual being, the human creature is defined through interpersonal relations. The more authentically he or she lives these relations, the more his or her own personal identity matures. It is not by isolation that man establishes his worth, but by placing himself in relation with others and with God” (Benedict XVI 2009, §53).

Possible Collaboration: Creating Assignments

Perhaps the most fruitful area of collaboration is crafting assignments less susceptible to AI plagiarism, given faculty concerns regarding this issue. As literacy advocates who understand the importance of developing strong research and writing skills, librarians should help faculty resist the temptation to revert to oral or in-class assessments to prevent AI use. Instead, librarians can suggest alternative writing assignments that decrease the risk of AI plagiarism. AI is less skillful with application, analysis, and problem-solving tasks, so assignments should draw on these skills. This would include assignments that relate to life experiences, apply learning to a local problem or context, discuss an ethical dilemma, or describe varying points of view on an issue (Bowen and Watson 2024, 201–207). Similarly, best practices for writing instruction, consisting of low-stakes prewriting assignments, outlining, drafting, peer review, and revision, discourage reliance on generative AI (Mills 2023). These assignments emphasize the writing process, which is invaluable as a learning and thinking tool. As stated by Anna Mills, a community college English instructor who writes frequently about AI,

No one creates writing assignments because the artifact of one more student essay will be useful in the world; we assign them because the process itself is valuable. Through writing, students can learn how to clarify their thoughts and find a voice. If they understand the benefits of struggling to put words together, they are more likely not to resort to a text generator. (Mills 2023)

Assignments utilizing AI can both improve student writing and boost AI literacy. For example, students can analyze AI output for biased, inaccurate, or misleading content or supplement the text with explanations and sources, compare the writing of different chatbots or of humans and AI, role-play with a chatbot regarding an aspect of ministry, or use AI to edit their writing (Bowen and Watson 2024, 207–217). Given the rapid improvement in generative AI technology, testing your assignments with AI is always advisable. Rather than simply eliciting responses to your assignment, ask questions about how the assignment could be improved, such as: How might students use AI on this assignment? How might I make it harder to cheat using AI on this assignment? How might AI undercut the goals of the assignment (Bowen and Watson 2024, 97–98)?
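For faculty who would rather not paste each probe into a chat window, a librarian could batch these questions through a chatbot interface. The sketch below is one hypothetical way to do so with the OpenAI Python library; the model name, assignment text, and system prompt are placeholder assumptions, and any comparable chatbot would serve equally well:

    # Hypothetical sketch: probe a chatbot with an assignment plus the
    # Bowen and Watson questions (2024, 97-98). Assumes the openai
    # package (v1+) and an OPENAI_API_KEY environment variable.
    from openai import OpenAI

    client = OpenAI()

    assignment = """Relate this week's reading on the theology of work
    to a concrete pastoral situation in your home parish."""  # placeholder text

    probes = [
        "How might students use AI on this assignment?",
        "How might I make it harder to cheat using AI on this assignment?",
        "How might AI undercut the goals of the assignment?",
    ]

    for probe in probes:
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[
                {"role": "system", "content": "You advise a seminary professor."},
                {"role": "user", "content": f"Assignment:\n{assignment}\n\n{probe}"},
            ],
        )
        print(f"{probe}\n{response.choices[0].message.content}\n")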

Conclusion

As they have done with information literacy, librarians must do what they can to ensure their students are AI-literate. At its most basic, this literacy includes knowing how to use AI effectively, evaluate AI applications, and consider the ethical implications of AI use. Librarians can achieve this goal by providing much-needed assistance to faculty in establishing policies for class use, teaching about the moral and ethical approaches to AI, and crafting assignments that discourage the overuse of generative AI as well as assignments that encourage the development of AI literacy.

LibGuides

Faculty and AI: Atla 2024 slides: https://leodehonlibrary.libguides.com/atla2024facultyAI

AI and the Classroom: https://leodehonlibrary.libguides.com/AI

References

ACRL Board. 2016. Framework for Information Literacy for Higher Education. Association of College and Research Libraries. https://www.ala.org/acrl/standards/ilframework.

AI Research Group for the Centre for Digital Culture of the Dicastery for Culture and Education of the Holy See. 2024. Encountering Artificial Intelligence: Ethical and Anthropological Investigations. Edited by Matthew Gaudet, Noreen Herzfeld, Paul Scherz, and Jordan Joseph Wales. Eugene: Pickwick Publications.

Benedict XVI. 2009. Caritas in Veritate. Encyclical Letter. Vatican City: Libreria Editrice Vaticana. https://www.vatican.va/content/benedict-xvi/en/encyclicals/documents/hf_ben-xvi_enc_20090629_caritas-in-veritate.html.

Bowen, José Antonio, and C. Edward Watson. 2024. Teaching with AI. Baltimore: Johns Hopkins University Press.

Christian AI. 2024. “FAQ.” https://www.christianai.app/!/faq.

Eaton, Lance. 2023. “Syllabi Policies for Generative AI.” https://bit.ly/AI-Syllabi. Last updated September 15, 2023.

Furze, Leon. 2023. “Teaching AI Ethics: Bias and Discrimination.” Leon Furze (blog), March 6. https://leonfurze.com/2023/03/06/teaching-ai-ethics-bias-and-discrimination/.

Furze, Leon. 2024. Practical AI Strategies. Melbourne: Amba Press.

Gaudet, Matthew. 2022. “An Introduction to the Ethics of Artificial Intelligence.” Journal of Moral Theology 11 (1). https://jmt.scholasticahq.com/article/34121-an-introduction-to-the-ethics-of-artificial-intelligence.

Jay, Sam. 2024. “Survey Highlights Faculty Perception of Generative AI.” Metropolitan State University of Denver, February 20. https://www.msudenver.edu/early-bird/survey-highlights-faculty-perception-of-generative-ai/.

John Paul II. 1993. Love and Responsibility. San Francisco: Ignatius Press.

Klebanov, Sam. 2024. “Google’s AI Says You Should Eat Pebbles.” Morning Brew, May 24. https://www.morningbrew.com/daily/stories/2024/05/24/google-s-ai-says-you-should-eat-pebbles.

McLellan, Justin. 2024. “Pope Asks World’s Religions to Push for Ethical AI Development.” Catholic News Service, July 10. https://www.usccb.org/news/2024/pope-asks-worlds-religions-push-ethical-ai-development.

McGever, Sean. 2023. “Theologically Informed Pedagogical Solutions to the ChatGPT ‘Problem’.” Didaktikos (blog), April 3. https://www.logos.com/grow/didaktikos-chatgpt-seminary/.

Mills, Anna R., and Lauren M. E. Goodlad. 2023. “Adapting College Writing for the Age of Large Language Models Such as ChatGPT: Some Next Steps for Educators.” Critical AI (blog), January 17. https://criticalai.org/2023/01/17/critical-ai-adapting-college-writing-for-the-age-of-large-language-models-such-as-chatgpt-some-next-steps-for-educators/.

Mills, Anna. 2022. “AI Text Generators and Teaching Writing: Starting Points for Inquiry.” WAC Clearinghouse. Last updated November 18, 2023. https://wac.colostate.edu/repository/collections/ai-text-generators-and-teaching-writing-starting-points-for-inquiry/.

Mills, Anna R. 2023. “ChatGPT Just Got Better. What Does That Mean for Our Writing Assignments?” Advice (blog), The Chronicle of Higher Education, March 23. https://www.chronicle.com/article/chatgpt-just-got-better-what-does-that-mean-for-our-writing-assignments.

MLA-CCCC Joint Task Force on Writing and AI. 2023. “Academic Integrity and Assignment Design.” September 22. https://aiandwriting.hcommons.org/2023/09/22/academic-integrity-and-assignment-design/.

Ng, Davy Tsz Kit, Jac Ka Lok Leung, Samuel Kai Wah Chu, and Maggie Shen Qiao. 2021. “Conceptualizing AI Literacy: An Exploratory Review.” Computers and Education: Artificial Intelligence 2. Accessed July 11, 2024. https://doi.org/10.1016/j.caeai.2021.100041.

Perrigo, Billy. 2023. “OpenAI Used Kenyan Workers on Less Than $2.00 per Hour to Make ChatGPT Less Toxic.” Time, January 18. https://time.com/6247678/openai-chatgpt-kenya-workers/.

Szelenyi, Balazs. 2024. “Adopt or Avoid: Faculty Dilemmas and Decisions on Generative AI in Teaching and Research.” Faculty report, Northeastern University. https://faculty.northeastern.edu/senate/wp-content/uploads/sites/2/2024/03/Addendum-4-Survey-Results-Adopt-or-Avoid-Northeastern-Faculty-views-on-using-generative-AI.pdf.

Endnotes

  1. Generative Artificial Intelligence (GAI) is a subset of artificial intelligence that can generate various forms of content, e.g., text, images, or audio. This essay uses the term AI to refer specifically to generative AI.

  2. See transcripts of responses from various chatbots to the prompt “Identify all reasons why seminarians and future priests need to know about generative AI. List the five most important reasons”: https://leodehonlibrary.libguides.com/c.php?g=1403071&p=10387307.

  3. See also the “Faculty and AI—Atla 2024” LibGuide for other drafting tips: https://leodehonlibrary.libguides.com/c.php?g=1403071&p=10388394.

  4. See also the ACRL Framework for Information Literacy for Higher Education, “Scholarship as Conversation” (ACRL Board 2016).

  5. See transcripts of responses from various chatbots to the prompt “Act as a Roman Catholic seminary professor. Describe to seminary students why AI should not be used in seminary assignments”: https://leodehonlibrary.libguides.com/AI/chat.