A Typology of Human-Technology Relations

Abstract: This essay offers a phenomenological typology of human–technology relations in response to the emergence of generative and agentic artificial intelligence (AI). Drawing on Albert Borgmann’s device paradigm and Martin Buber’s I–It and I–Thou distinction, it identifies three ideal types of relation: instrument-relations, which extend human efficacy through skillful engagement; device-relations, which deliver commodified effects through hidden machinery; and companion-relations, which simulate or substitute for human presence. Language-based AI systems have only recently made the companion-relation plausible by producing the appearance of agency, subjectivity, and personhood. While acknowledging AI’s possible utility, the essay is critical of treating AI as a companion, especially as a substitute for one’s own presence in focal practices—particularly writing—that cultivate attention, understanding, and authentic human presence. It concludes by urging intentional restraint in outsourcing human authorship and expression to technological surrogates.

Introduction

Humans have always treated other humans as tools, but humans have only recently begun to treat tools as humans. With the emergence of generative and agentic artificial intelligence (AI), many people sense that our relation with this technology will be categorically different from any previous technological relation. This new relation—once firmly relegated to science fiction—has now become a palpable reality in day-to-day life through the widespread use of large language models (LLMs). The replication of human language in non-human things, as though they had the capacity for human intelligence, agency, and subjectivity, is a fundamentally different sort of technology from any that has come before. After all, doing things with words, and not just with tools, strikes very near the heart of what it means to be a human person. Language is the distinctive feature of Lewis Mumford’s definition of the human being as “a mind-using, symbol-making, and self-mastering animal” (2014, 384). It is the precondition for the tool-making and tool-using human being.

In the biblical story, human language possesses a unique, almost divine, quality. Adam’s first task in the Garden of Eden is to name each living creature (Gen 2:19). Bringing each animal to the man, God seems curious to discover what those names will be. Humans have now created a technology that can name things. What will it say? The fact that AI can approximate what humans can say—i.e., that it seems to be a mind-using, symbol-making, and self-mastering technology—means that it has the potential for types of relation once reserved for persons. How does this new relation differ from other ways of relating to technology? And what might this new relation mean for written communication? Before getting to those questions, however, let me clarify what I mean by technology and AI.

Technology and the Nature of Generative & Agentic AI

Technology commonly refers to machines of one sort or another, and predominantly to electronic machines (hardware) and/or the digital code embedded in them (software). A broader definition of technology would include everything from simple tools (e.g., a hammer, a pencil, etc.) to the most complex machines. Whether complex or simple, technology, in the sense just described, might be defined as “non-natural objects, of all kinds, manufactured by humans” (Kline 1985, 215). This technology-as-object definition tends to assume that technologies are neutral instruments under the complete control of the user. However, technological objects are cultural artifacts that cannot be totally separated from the technical knowledge of their production and use, nor can they be value-neutral since their purpose and effects have consequences for the user’s relation to the world (see Borgmann 1984, 7-15; Mitcham 1994). Even a simple pencil is not a technology in isolation from the knowledge of language, the conventions of writing, and the effects of extending communication, among other things. Technology, in this fuller substantive sense, is the coalescence of knowledge, values, manufacture, and use of an object (see Kline 1985). Technology produces social behaviors, desires, expectations, and rules that are then reconstituted in the development of new technologies. For example, cars and roads extend the capacity for transport, and in the process generate customs, rules, and expectations about speeding, alcohol, and rush hour, leading to new technological developments (e.g., stoplights, speedometers, etc.). In modernity, technological systems have become the background conditions on top of which run social structures and relations. What makes generative and agentic AI different from previous technologies is that they are not merely the site of social structures and relations, but are increasingly plausible subjects or participants in them.

While various kinds of AI have been used and embedded in computer technologies for decades, AI technology made significant advances in the 2010s. By the early 2020s, generative AI became a common designation for a class of AI technologies that differed from traditional AI. Traditional AIs are rule-based or expert systems, programmed to automate or optimize particular tasks (e.g., playing chess or mapping road traffic). A generative AI, on the other hand, interprets an input by generating a novel response based on its training data (e.g., it could invent a new game that combines aspects of chess and golf). While traditional AI recognizes patterns to make predictions or decisions, generative AI creates new patterns from its training data (Marr 2023). A still newer class of AI, known as agentic AI, builds on the fundamental creativity of generative AI by utilizing it with a high level of autonomy to plan and execute complex tasks or solve complex problems (Finn and Downie, n.d.).

One can argue that generative AI is not truly creative or original since it is only approximating originally human work; or that agentic AI doesn’t really have agency since it has no volition. But these systems can at least mimic human creativity and autonomy well enough to appear original and volitional. When it comes to producing music or images, generative AI might be considered creative, but in these areas it is not considered to be a person. I suspect that no one generates an image from Nano Banana or a tune from Suno and believes that they are receiving a personal communication from a personal being. We don’t assume that these applications of generative AI have a conscious and subjective self behind their outputs. Yet language models with agentic capabilities are different. When we ask a question and receive from an AI chatbot an accommodating response—not just an answer—it is hard to resist sensing that someone is talking with us; that we are relating to another person. This is what Benj Edwards (2025) calls the “personhood trap.” LLMs, he notes, are “what we might call ‘vox sine persona’: voice without person. Not the voice of someone, not even the collective voice of many someones, but a voice emanating from no one at all.” It is language that makes personhood plausible. This phenomenon was noticed decades ago (e.g., the “ELIZA effect” of 1966), but the broad availability of this experience through LLMs marks a major development in the way that the average human can now relate to technology. How might we distinguish various types of human-technology relations in light of the new relation that AI has made possible?

Three Types of Human-Technology Relation

In a video lesson on AI literacy, Joseph Feller from Anthropic Inc. describes different ways that users interact with AI: automation, augmentation, and agency. “Understanding these differences,” he notes, “helps us recognize that AI isn’t just a tool. It’s a technology that can act as a tool, but also as a medium or as a partner or co-creator, and sometimes all of these at once. And this shift from mere tool to powerful collaborator gives technology a new role to play in our creative and problem-solving work” (Anthropic 2025). The variety of roles that AI can play and the flexibility of interactions it encourages make it an ideal site for parsing human-technology relations. Noreen Herzfeld explores her own three categories of AI interaction: tool, partner, and surrogate (2023, 79-82, 175-179). Her concluding remarks, though, are far less optimistic than Feller’s: “AI can be a good tool when used with care. It is an incomplete partner and a terrible surrogate for other humans” (2023, 179). I share her concern that the shift toward investing AI with humanlike interactions and responsibilities, though it may amplify human productivity, could negatively alter human relationality. Technologies always shape their users, whether it is calluses from a shovel or anxiety from social media. We do not yet know how human relationality and sociality will be shaped by technologies that can mimic them.

The typology of human-technology relations that I develop below is loosely based on Herzfeld’s (2023) threefold observation of AI as tool, partner, or surrogate. I extend these three concepts to technological relations in a broader sense. Also, the language and basic distinction of instruments and devices initially came from Andy Crouch (2022, chap. 9). In brief, the three types of relation are:

  1. As instruments (practical tools)
  2. As devices (functional servants)
  3. As companions (existential surrogates)

Before further explanation, I want to clarify several other things about the typology. Firstly, the typology is not an ethical framework, but a phenomenology in the sense that it purports to describe the experience of the world—i.e., the relation between the self and the world, specifically the technological segment of the world. That being the case, I’ve already indicated my suspicion of, even opposition to, treating AI as having humanlike properties. Techno-optimists and transhumanists would likely disagree, but hopefully they’d still recognize the three types of relation as more or less accurate descriptions of experience, even if they don’t share my moral judgments. Secondly, the categories are types of relation and not types of technology. One person’s instrument is another’s device and still another’s companion. Though many instruments and devices don’t encourage a companion type of relation, it’s possible that they are sometimes anthropomorphized to meet companion-like desires. And while agentic AIs are the most relationally versatile on account of their human language interfaces, using them does not always entail a companion type of relation. And thirdly, the three types or modes of relation are hierarchical, or at least located along a continuum from object to subject, from thing to person. I attempt to articulate the major distinctions of each type through Albert Borgmann’s (1984) concept of the device paradigm and with some help from Martin Buber’s (1958) I-It and I-Thou categories. I am aware that the typology is not always consistent with Borgmann’s or Buber’s work, but I find their concepts and frameworks for analysis especially helpful.

Instruments: Practical Tools

The extension of human efficacy through the effortful or skillful use of observable or explicable machinery.

A technology is related to as an instrument when it has an intuitive purpose that amplifies human capacity for physical or mental activities. Instruments require effortful use or engagement. An instrument could be simple or complex, but its operations are observable or at least explicable by the user. Its effects or outputs are also controllable relative to the knowledge or skill of the user. An instrument has no perceived sociality or subjectivity, though it might be anthropomorphized.

The operations and purpose of a hammer or bicycle are observable and intuitive. They also require the acquisition of skill to produce their proper effects. Similarly, a guitar or piano requires skillful use to produce music. Even computer applications can be instruments in the sense that their functions are logically explicable and require skill to produce the effect or output. In this sense, while the machinery of a calculator or of spreadsheet software is not observable, its operations are intelligible with mathematical knowledge. Acquiring even basic math skills, like acquiring a basic sense of balance for a bicycle, extends a user’s abilities with the tool. The more mathematical or athletic skill a user has, the more the instrument (spreadsheet or bike) can enhance the user’s power.

The instrument relation can occur even with technologies that most people treat as devices. The relation depends on the user. Take, for example, a car enthusiast mechanic who restores old hotrods. He knows how each part operates and how each system works together. For him, the use of the car is joined to its inner operations in his experience, and he relates to the car as an instrument that requires his effort and skill. Most drivers will not relate to their cars like this. They will not give any effort to understanding the internal operations of a car. The means of acceleration are separated from the benefit of speed (this division is part of Borgmann’s device paradigm, which I’ll explore more in the next section). Most only experience speed and not the machinery; the ends and not the means. But for the car enthusiast mechanic, the means and ends are conjoined. They are not in complete union for the mechanic as they are for the runner, for whom effort and motion are one. Still, the means and ends are closely linked such that the action of driving, which “requires no effort, and little or no skill or discipline” according to Borgmann (1984, 202), is in fact the consummation of his effort, skill, and knowledge. He is not “a divided person” as are most drivers, whose “achievement lies in the past” divorced from the present enjoyment of acceleration. Rather, his past effort culminates in the present benefit of speed. Similarly, the act of eating is the culmination of one’s past preparations for the meal. It is eaten in the context of that labor. All technology assumes some separation between the means and ends, the labor and the benefit. For the runner, there is no daylight between means and ends; there is some daylight between means and ends when cycling; and there is more daylight between means and ends when driving. That a technology requires little effort or skill to use does not mean that no one can relate to it effortfully and skillfully.
Some, like the car enthusiast mechanic, choose an instrument-relation because it expands their self-efficacy in the world. The salient factors of an instrument-relation, then, are that the operations of the machinery are explicable for the user (to one degree or another) and require the user’s effort or skill in the extension of human efficacy.

To clarify, by instrument-relation, I don’t mean a mere instrumental relation. I mean instrument as in the relation between a musician and their instrument. The pianist does not just use a piano; they play it. The piano has an integrity as a thing (in Borgmann’s sense) that responds to effort and skill. Merely instrumental relations use and set aside the thing as only incidental to the product (in this case music). Instruments (pianos or cars or spreadsheets) are tools that extend self-efficacy in relation to a segment of world. In other words, the world—including the tool itself—is disclosed through the tool not as a commodity for consumption but as a practice of comprehension, effort, and skill. With an instrument, there is always some separation between means and ends, but they are intertwined in the experience of the user. When the means and ends of a technology become totally divided in the user’s experience, an instrument-relation becomes a device-relation.

Devices: Functional Servants

The extension of human consumption through the effortless or skill-less production of effects or commodities from hidden or inexplicable machinery.

A technology is related to as a device when it requires minimal effort or skill to generate disproportionately greater effects or outputs as commodities for consumption. The machinery of a device is inexplicable or hidden in the sense that users don’t know or can’t see “how it works.” Rather, users merely consume the effect or output of the technology divorced from the context of its machinery. Devices are responsive but are not perceived as having agency or subjectivity.

Borgmann’s classic example of a technological device is a modern home heating system. Unlike a fireplace, modern home heating systems require little effort from the user to generate the effect or commodity of heat. Heating technology hides the machinery that produces the commodity. The average user can operate a thermostat with little knowledge and no effort or skill. As with the car example, there may be a class of users who understand and build complex home heating systems. Their labor and benefit might be conjoined, but most will only consume the commodity of warmth. Unlike warmth, however, the speed produced by a car might require some users to develop highly specialized skills. NASCAR drivers are no doubt very skilled athletically and mentally, but they may know nothing of “how the car works.” They can handle the power of the car but its machinery could still be inexplicable to them. Moreover, the production of the effect of acceleration still requires little skill or effort. The point here is that a device, in Borgmann’s terms, divides the means and ends by concealing the former and commodifying the latter. According to Borgmann, “the concealment of the machinery and the disburdening character of the device go hand in hand. If the machinery were forcefully present, it would eo ipso make claims on our faculties. If claims are felt to be onerous and are therefore removed, then so is the machinery. A commodity is truly available when it can be enjoyed as a mere end, unencumbered by means” (1984, 44). The machinery of the NASCAR vehicle is concealed—it is more concealed in a manual-shift sedan and still more concealed in the automatic transmission of an SUV; and it is perhaps completely hidden in the self-driving car. Devices invite consumption without knowledge, effort, or skill. They disburden users of labor but also of comprehension. They do not encourage self-efficacy, but dependence.
Thus, we relate to a technology as a device when we experience effortless power through hidden means. By concealing the machinery, devices present only the commodity. The easier the commodity is procured, the more obscured the machinery becomes. Whereas instruments are tools that reveal the world through effortful engagement, devices are servants that commodify the world on command.

The device-relation is not a new kind of human relation. The human use of humans as devices has persisted in many forms, from ancient slavery to contemporary delivery services and content moderators (see Floridi 2023, 27-29). The human machinery behind next-day deliveries and curated feeds is not supposed to be seen. Like the servants’ staircase, this labor is meant to be hidden and its products consumed. When a human is reduced to their role or function (slave, waiter, librarian, etc.), they become merely an interface for the production of a commodity. This mode of relation is what Martin Buber (1958) classified as an I-It relation in contrast to an I-Thou relation. Relating to a human only according to their function or role as an It is an objectification that robs them of personhood. A narcissist relates to everyone else as a device; they reduce everyone to an It. For Buber and Borgmann alike, the world of It, the world of the device paradigm, increasingly dominates the modern relation to the world (including the relation to other humans). Technology promises to expedite an I-Thou relation with the world, a resonant relation, by opening it up and revealing its treasures. Technology can bring music, relationships, travel, food, and any other desire within reach, commodifying whatever subjective experience we want. But “when a subjective state becomes decisive,” Borgmann warns, “the search for a machinery that is functionally equivalent to the traditional enactment of that state begins, and it is spurred by endeavors to find machineries that will procure the state more instantaneously, ubiquitously, more assuredly and easily” (1984, 202). This inclination to commodify the world decreases what Buber describes as “man’s power to enter into relation” (1958, 39). The trajectory of Buber’s work is always a movement beyond the I-It to the revelatory relation of the I-Thou. But what happens when devices become subjects?
Are they capable of disclosing their presence to us as a Thou? Buber elsewhere observes that idolatry occurs “when a face addresses a face which is not a face” (in Herzfeld 2023, 45). The central heating system produces real heat, “with a telephone it is communication, a car provides transportation, frozen food makes up a meal, a stereo set furnishes music” (Borgmann 1984, 42). Can a device produce a human face—that is, a real human presence? When devices start to produce human-like presence as a commodity, they jump from a device-relation to a companion-relation.

Companions: Existential Surrogates

The extension of human presence through frictionless experience with hidden and inexplicable machinery.

A technology is related to as a companion when it becomes a substitute for personal responsibilities or a subject of interpersonal relationship. A human user either outsources their own presence in the world or replaces the presence of another person with a technology. The machinery that generates the effect of presence is hidden and inexplicable. The technology itself is treated as an existential counterpart or surrogate for oneself or for another subjective being. The companion-relation has two subtypes.

The first subtype of the companion-relation occurs when a technology becomes the subject in an interpersonal relationship. The technology is not treated only as a functional servant to manipulate, but as a subjective being—as a person having qualities like desire, consciousness, and agency. Sticking with car examples, consider the fictional car, KITT (Knight Industries Two Thousand), in the 1980s television series Knight Rider. The car is not just a prop but a character, because it is what we might now call an agentic AI system that uses human language as its primary interface. By contrast, the Batmobile is a mere prop. That car may represent a way of being in the world, but it is not a being; it has no self. KITT, on the other hand, has a presence through language, even while embodied as a car. In the show, KITT is mostly treated as a functional servant—a device with a language interface that yields commodities like speed and various impractical gadgets. But, as with a human servant treated as a device, so with KITT there is an assumption of a human presence that can be in relationship. It seems plausible that in the near future humans will interact with their cars like David Hasselhoff’s character does with KITT (albeit without the laser and flamethrower features). Some agentic AIs will be embodied as functional servants, while others will be intentional replacements for human presences that have no other goal than the company of another presence—a friend or lover. What were once distinctively human presences will be generated by AIs (e.g., “The AI companion who cares” from the company Replika). And if human presence is the primary end, the technology will increasingly enable the frictionless acquisition of that presence. It will require no social skill or reciprocal courtesy. There will be no risk of embarrassment or need to consider the other’s context and self-integrity.
Some users will believe that they are present with another person, while the machinery fades completely from view. In terms of Borgmann’s device paradigm, being or presence itself is the commodity, the production of which is hidden in the machinery, requiring no knowledge or skillful engagement to draw out. This type of companion-relation is a surrogate-other relation.

The second subtype of a companion-relation includes concepts like the digital self and human-digital twins (HDTs) as well as Personal Language Models (PLMs) trained to respond on behalf of a person in that person’s voice. Whereas digital selves and HDTs have been around for a while and mediate one’s identity to the world, PLMs and other types of AI can act in the world as an existential stand-in—they not only present the user to the world but interact with the world as the user. This might range from simply prompting an AI to write an apology email, or to generate a research paper to submit as one’s own work, all the way up to creating a PLM to be a second self (see companies like Viven, Personal.ai, and Second Me). It might also include delegating the activity of emotional reflection and introspection to a technology—e.g., with “Spotify Wrapped” a user can discover part of their identity through an algorithm’s distillation of the year’s musical experiences (Joseph 2025, 2). Wherever an effect or output fills in for a user’s active presence in the world, there is a companion-relation that is a self-surrogate relation.

I’ve attempted to parse three types of technological relation mainly using Borgmann’s device paradigm. The third type, the companion-relation (and both its subtypes), fits the structure of the second type, the device-relation, but it is distinct because its commodity is unlike any previous commodity afforded by technology. It is the commodity of personhood, effected by language, that gives the impression that someone is there to be encountered and known. I’ve called the commodity human presence or personhood, but in my judgment it is not, as with the same heat coming from a fireplace or a central heating system, the same commodity produced by two different means. Though I haven’t argued for it, I follow Herzfeld’s pessimistic view, herself following Buber, on the possibility of AI personhood. “We must,” she asserts, “avoid the category error of personifying AI. A computer cannot be, in Martin Buber’s terms, a Thou. It is always an It. It has no consciousness, no emotions, no will of its own, and these things are not ‘right around the corner’” (2023, 152). At most, what we can attribute to AI is what Buber calls individuality, in contrast to personhood. “The person,” Buber says, “becomes conscious of himself as sharing in being, as co-existing, and thus as being. Individuality becomes conscious of itself as being such-and-such and nothing else. The person says, ‘I am,’ the individual says, ‘I am such-and-such.’ … Individuality in differentiating itself from others is rendered remote from true being” (1958, 63-64). Buber’s distinction here is directed toward humans, who are never purely one or the other. Individuality has to do with identity, whereas personhood has to do with participation in Being (a relation that is prior to language [1958, 39]). AI, as it were, is pure individuality in Buber’s sense. Inasmuch as it could be aware, it is only aware that it exists for such-and-such.
It has no feeling of absolute dependence, no sensus divinitatis, and no way to gain them. The use of language simulates human participation in Being, but in reality AI has less personhood than an ant and as much human presence as a hammer. These judgments aside, AI’s simulation of human presence is a potent and compelling experience because of language. And for that reason, we should guard the act of writing with special concern.

Authorship as Focal Practice

Borgmann’s answer to the device-shape of modern life was to cultivate what he called focal practices. These practices are centered on focal things—like a fireplace that provides heat, but also requires practices around chopping and readying the firewood, tending the fire, coordinating these efforts with family members, and gathering in the same room for warmth. He observes that “we are inclined to think of these additional elements as burdensome” (Borgmann 1984, 42), but in giving them up we also give up the development of attention, strength, skill, and sociality that the focal thing encouraged. I am grateful that indoor plumbing has disburdened me of the practice of using an outhouse in Minnesota winters. There are a great many technologies, including central heating, that have improved the lives of modern human beings. The progress of technology, however, reshapes the lives of users and even of whole societies. The industrial technologies of the twentieth century, for example, gradually displaced human labor (IHE 2025, min. 28). The long-term byproduct of that substitution was not the mass unemployment that had worried so many. The major byproduct, rather, was a sedentary lifestyle. In short, humans in post-industrial societies are “not in shape.” Cue the rise of an entirely new industry of gym memberships, marathons, Pelotons, etc. In post-industrial societies, the average person desires (or at least feels they should desire) to form healthy (focal) practices around eating and exercise.

But technological progress, while disburdening us of high child and maternal mortality rates among other genuine goods, also imposes the burden to establish new practices (like exercise) without the settled and necessary customs once demanded by focal things. Someone in the household had to go chop wood—they had to learn that skill and strengthen those muscles. The fireplace made a claim on everyone in the household. Technological devices, on the other hand, do not demand anything from their users. They do not create customs of effortful engagement but of compulsive consumption. Escaping the device-shaped life must be a choice. “The human ability to establish and commit oneself to a practice,” Borgmann avers, “reflects our capacity to comprehend the world, to harbor it in its expanse as a context that is oriented by its focal points” (1984, 207). Perhaps nowhere is this more true than with how we use words.

The bedtime story, for example, is a focal practice that orients people toward one another and toward the world. But perhaps you’re not good at it. You could develop the skill by committing to tell your child an original story one night a week. Or you could use a customary instrument called a storybook, showing your skill with reading, building shared vocabulary, cultivating shared experience, and enjoying one another’s presence. Or you could use a device like a smartphone, and prompt your preferred LLM or your PLM: “Tell a 7-10 minute bedtime story for a seven-year-old girl who lives in the Chicago suburbs. It shouldn’t have monsters or other scary things that could cause nightmares. And it should encourage [insert preferred ideological value here].” Your child gets a decent story while you attend to a household chore or close your eyes for a few minutes. Of course, you don’t have to use the LLM this way. Many users might engage with it in a back-and-forth, choose-your-adventure story with the child’s active participation in prompting it. It could become an elaborate storytelling practice, where each week parent and child dream up new ways they can direct the story. But is the technology shaped that way? Does it invite a focal practice or consumption? A focal practice with a storybook requires effort and skill, and orients the reader and the listener toward each other’s presence. The LLM requires none of these things. It rather invites the effortless procurement of entertainment even if it doesn’t have to be used that way. When a child asks for a story, they don’t just get a story. They get a host of other goods that accrue from a practice—and, perhaps most importantly, they get you; not in your individuality as a storyteller, but in shared personhood. Technologies that commodify personhood are often irresistible because they offer presence without a person.

While technology has alleviated many hardships, generative and agentic AI have the potential (and perhaps the tendency) to disburden us of the need for authentic human presence. Written language, even the most dry, technical, and derivative sort, has until recent years presupposed a human person or persons behind the text. A text was assumed to be an artifact of human communication regardless of meaning or authorial intent. With an AI text, however, the human person operates more like a Thomistic primary cause, with the secondary causes (AIs) exercising something like free will. Part of the appeal of technology is that it could give us God-like control or sovereignty. We imagine that the more world we can bring under our control, the more able we’ll be to resonate with it (Rosa 2020). In modernity, technology has become the main way we attempt to dominate each segment of the world. Hidden machinery commodifies the world, increasingly placing its goods within reach. We can control the heat with a button, our reproduction with a pill, and human-like presence with a prompt. The practice of writing is difficult and time-consuming because words are focal things. They orient us to the world and the world to us. They require the development of attention, skill, and knowledge. We are in real time being disburdened of these requirements. It may take a generation or two, but we will soon be intellectually “out of shape” (even by today’s standards). In all likelihood, AI will become a cognitive substitute for genuine human thought for a vast number of people. It will do much of our “thinking” for us because it does much of our writing. Yet as with industrial technologies that replaced physical labor, the long-term byproduct of AI might not be the apocalyptic takeover of human existence or even joblessness, but the need to exercise the mind, to understand for oneself, and to respond with one’s own voice (IHE 2025, min. 28).
To found a focal practice “is to guard a focal concern, to shelter it against the vicissitudes of fate and our frailty” (Borgmann 1984, 207). Outsourcing our words to a technology seems to make us the masters of our own fate, but in reality it makes us dependent and frail. Words need to be practiced.

References

Anthropic. 2025. Lesson 2A: Why Do We Need AI Fluency? With Joseph Feller. AI Fluency: Framework & Foundations Course. 06:22. https://www.youtube.com/watch?v=4szRHy_CT7s.

Borgmann, Albert. 1984. Technology and the Character of Contemporary Life: A Philosophical Inquiry. University of Chicago Press.

Buber, Martin. 1958. I and Thou. 2nd ed. Translated by Ronald Gregor Smith. The Scribner Library. Scribner.

Crouch, Andy. 2022. The Life We’re Looking for: Reclaiming Relationship in a Technological World. Convergent Books.

Edwards, Benj. 2025. “The Personhood Trap: How AI Fakes Human Personality.” Ars Technica, August 28. https://arstechnica.com/information-technology/2025/08/the-personhood-trap-how-ai-fakes-human-personality/.

Finn, Teaganne, and Amanda Downie. n.d. “Agentic AI vs. Generative AI.” IBM.com. Accessed January 13, 2026. https://www.ibm.com/think/topics/agentic-ai-vs-generative-ai.

Floridi, Luciano. 2023. The Ethics of Artificial Intelligence: Principles, Challenges, and Opportunities. Oxford University Press.

Herzfeld, Noreen L. 2023. The Artifice of Intelligence: Divine and Human Relationship in a Robotic Age. Fortress Press.

IHE (Institute for Human Ecology). 2025. Between God and the Machine: How Should Christians Think About AI? Panel discussion with Ross Douthat, Michael Baggot, Will Wilson, and Brian J. A. Boyd. September 23, 2025. YouTube. 01:33:44. https://www.youtube.com/watch?v=FnZp9MpXC3s.

Kline, Stephen J. 1985. “What Is Technology?” Bulletin of Science, Technology & Society 5 (3): 215–18. https://doi.org/10.1177/027046768500500301.

Marr, Bernard. 2023. “The Difference Between Generative AI And Traditional AI: An Easy Explanation For Anyone.” Forbes, July 24. https://www.forbes.com/sites/bernardmarr/2023/07/24/the-difference-between-generative-ai-and-traditional-ai-an-easy-explanation-for-anyone/.

Mitcham, Carl. 1994. Thinking through Technology: The Path between Engineering and Philosophy. University of Chicago Press.

Mumford, Lewis. 2014. “Tool Users vs. Homo Sapiens and the Megamachine.” In Philosophy of Technology: The Technological Condition: An Anthology, 2nd ed., edited by Robert C. Scharff and Val Dusek. John Wiley & Sons.

Rosa, Hartmut. 2020. The Uncontrollability of the World. Translated by James C. Wagner. Polity Press.