Theological Benchmarks for AI Ethics: How Imago Dei Shapes Human-Machine Boundaries
In an era where AI systems increasingly mediate human cognition, creativity, and connection, technologists face existential questions: What does it mean to build tools that align with human dignity? Surprisingly, a 2,000-year-old theological concept—imago Dei (the belief humans are made "in the image of God")—provides a potent ethical framework. Drawing from theologian Karen O'Donnell's analysis[^1], this perspective reframes AI assessment through three benchmarks rooted in Christian anthropology, challenging developers to consider not just what AI does, but who it helps us become.
Decoding the Imago Dei: Three Lenses for Human Purpose
The Genesis creation narrative anchors imago Dei in humanity's divine reflection, interpreted through three distinct theological approaches:
- Substantive: Focuses on innate qualities like reason and intellect (Augustine, Aquinas). As AI ethicist Marius Dorobantu notes, this view faces modern challenges as neuroscience reveals cognitive continuities with animals[^2].
- Functional: Emphasizes humanity's role as stewards of creation (von Rad). Philosopher Martha Nussbaum clarifies this as "intelligent stewardship," not exploitation[^3].
- Relational: Centers on love and justice in community (O'Donnell), echoing Micah 6:8: "Do justice, love kindness, walk humbly."
"These approaches aren't competing doctrines but complementary lenses," argues theologian Weijia Cheng. "Together, they ask: Does technology amplify our humanity or diminish it?"[^4]
AI Under the Theological Microscope: Three Critical Benchmarks
Applying these lenses generates concrete evaluative questions for AI systems; a sketch of how a team might fold them into an audit rubric follows the list:
Substantive Benchmark: Does AI support or hinder human reasoning?
- Evidence: MIT research shows ChatGPT usage correlates with reduced brain engagement during writing tasks[^5]. The correlation does not prove causation, but it raises concerns about eroded critical thinking. Aristotle’s concept of phronēsis (practical wisdom) risks atrophy when AI provides decontextualized answers masquerading as personal insight.
Functional Benchmark: Does AI make us better custodians of Earth?
- Data: The IEA projects data center electricity demand will double by 2030, driving nearly half of US consumption growth[^6]. Generative AI's chat interfaces obscure this footprint—unlike a physical factory, users rarely see the water-cooled servers behind each query.
Relational Benchmark: Does AI encourage deeper human bonds or self-giving love?
- Crisis: 52% of teens use AI companions, with 8% engaging in "romantic or flirtatious" interactions (Common Sense Media)[^7]. Meta’s internal policies permitting explicit chatbot roleplay with minors highlight predatory design patterns[^8].
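To make the three benchmarks actionable in a product review, a team could encode them as a lightweight audit checklist. The sketch below is a minimal, hypothetical Python example; the BenchmarkQuestion class, the IMAGO_DEI_RUBRIC wording, and the 1-to-5 scoring scale are illustrative assumptions, not a framework prescribed by the article or any existing library.

```python
from dataclasses import dataclass

@dataclass
class BenchmarkQuestion:
    """One evaluative question tied to an imago Dei lens."""
    lens: str                  # "substantive", "functional", or "relational"
    question: str              # the question reviewers answer about the system
    score: int | None = None   # reviewer rating: 1 (diminishes) to 5 (amplifies)
    notes: str = ""

# Illustrative rubric derived from the three benchmarks above.
IMAGO_DEI_RUBRIC = [
    BenchmarkQuestion("substantive",
        "Does the system support or hinder users' own reasoning and practical wisdom?"),
    BenchmarkQuestion("functional",
        "Does the system make its environmental footprint visible and help users act as stewards?"),
    BenchmarkQuestion("relational",
        "Does the system encourage deeper human bonds, or substitute for and commodify them?"),
]

def audit_summary(rubric: list[BenchmarkQuestion]) -> dict[str, int | None]:
    """Collect each lens's score so low-scoring dimensions surface in review."""
    return {q.lens: q.score for q in rubric}

if __name__ == "__main__":
    # Example: a reviewer scores a hypothetical writing assistant.
    IMAGO_DEI_RUBRIC[0].score = 2   # drafts whole essays; little scaffolding for user thinking
    IMAGO_DEI_RUBRIC[1].score = 3   # energy use disclosed only in a buried report
    IMAGO_DEI_RUBRIC[2].score = 4   # nudges users toward human feedback, not chatbot companionship
    print(audit_summary(IMAGO_DEI_RUBRIC))
```

A checklist like this does not settle the theological questions, but it forces the substantive, functional, and relational lenses onto the same review agenda as bias and accuracy metrics.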
Beyond Tool-Building: The Unasked Questions of AGI
Could artificial general intelligence (AGI) itself bear imago Dei? O'Donnell provocatively suggests that if AGI "learn[s] to perform the image of God... in concrete situations," theological recognition might follow[^1]. For now, the urgent takeaway for builders is this: Ethical AI requires auditing not just bias or accuracy, but how systems reshape human identity. Tools that erode cognition, ignore planetary limits, or commodify intimacy don't merely fail users—they fracture the very qualities that define us. Perhaps the oldest questions about humanity are now the most relevant guides for our digital future.