AI and Personhood: A Theological Perspective

Daniel Hackmann

Abstract

The meaning of personhood in the context of AI and AI-driven systems is attracting considerable attention from philosophers, social scientists, and legal scholars. Current discussions tend in two implicit directions. The first concerns whether we should assign personhood to AI systems, and if so, how and when we should do so. The second concerns our notion of personhood itself, specifically human nature and being, and its implications in a world of AI systems. I will briefly lay out some of the positions taken on this complex topic and show that a Christian theological perspective grounded in the concept of the imago Dei can yield fruitful insights in this important area.

Article Details

How to Cite
Hackmann, Daniel. “AI and Personhood: A Theological Perspective”. Verba Vitae 1, no. 3-4 (October 11, 2024): 97–111. Accessed October 18, 2024. https://verba-vitae.org/index.php/vvj/article/view/32.
Section
Philosophy and Philosophical Theology

References

John McCarthy, “What is AI? / Basic Questions,” http://jmc.stanford.edu/artificial-intelligence/what-is-ai/index.html. Accessed July 29, 2024.

“Narrow AI” is perhaps easier to define: AI that enhances human capabilities, for example in healthcare, where it can be used for more rapid diagnosis or for guidance in neurosurgery. Here the goal does not seem to be superhuman intelligence or thinking. But even technological optimists see a need for caution, especially in areas such as autonomous vehicles.

Cited in Will Douglas Heaven, “What is AI?,” MIT Technology Review, July 10, 2024. https://www.technologyreview.com/2024/07/10/1094475/what-is-artificial-intelligence-ai-definitive-guide/. Accessed July 29, 2024.

These are the first two phases of Gartner’s famous Hype Cycle research methodology.

Gebru was forced out of Google mainly because of two articles in which she was critical of AI. She had co-led the Google AI Ethics team.

Cited in Heaven, “What is AI?,” July 10, 2024. https://www.technologyreview.com/2024/07/10/1094475/what-is-artificial-intelligence-ai-definitive-guide/. Accessed July 29, 2024.

Ibid.

Adrienne Mayor, Gods and Robots (Princeton, NJ; Oxford, UK: Princeton University Press, 2018), 9.

N. M. Richards and W. D. Smart, “How Should the Law Think About Robots?” in Robot Law, eds., Ryan Calo, A. Michael Froomkin, and Ian Kerr (Cheltenham, UK; Northampton, MA: Edward Elgar Publishing, 2016), 18-21. https://www.google.com/books/edition/Robot_Law/7YpeCwAAQBAJ?hl=en&gbpv=1&dq=Robot+Law&pg=PR3&printsec=frontcover.

Timnit Gebru and Émile Torres, “The TESCREAL Bundle: Eugenics and the Promise of Utopia through Artificial General Intelligence,” First Monday 29, no. 4 (April 1, 2024). https://doi.org/10.5210/fm.v29i4.13636.

Artificial General Intelligence. Ray Kurzweil, for example, says “The 21st century will be different. The human species, along with the computational technology it created, will be able to solve age-old problems…and will be in a position to change the nature of mortality in a post-biological future.” Cited in John C. Lennox, 2084 (Grand Rapids, MI: Zondervan Reflective, 2020), 44. OpenAI defines AGI in its charter as “highly autonomous systems that outperform humans at most economically valuable work,” “OpenAI charter,” at https://openai.com/charter. Accessed July 15, 2024.

See Ray Kurzweil, The Singularity is Near: When Humans Transcend Biology (New York: Penguin Books, 2005) and Ray Kurzweil, The Singularity is Nearer: When We Merge with AI (New York: Viking, 2024).

Gebru and Torres, “The TESCREAL Bundle.” https://firstmonday.org/ojs/index.php/fm/article/view/13636.

This discussion on Reddit in April 2024 reveals such concerns: https://www.reddit.com/r/singularity/comments/1cztcfa/why_would_the_elites_in_control_of_agi_do/?rdt=60199. Accessed July 16, 2024. The view expressed is that there seems to be no reason why wealthy elites should do anything benevolent with AGI. If the creators of AGI cannot be trusted to do anything benevolent with AGI, what hope do we have that AGI or a superintelligence would have a benevolent view of humanity?

Ibid.

Utopia refers by definition to something that has no place, that does not exist. It is instructive to note that boundless enthusiasm about the progress of science and society was a primary characteristic of 19th-century European thought. Much of that optimism was dashed by the onset of World War I.

Part 2 of The Singularity is Nearer. The subtitle of that book is When We Merge with AI.

See for example, Jeffrey Funk, “Are we close to Peak AI Hype?” Mind Matters, July 12, 2024. https://mindmatters.ai/2024/07/are-we-close-to-peak-ai-hype/. Accessed July 29, 2024.

Mark O’Connell, To Be a Machine: Adventures among Cyborgs, Utopians, Hackers and the Futurists Solving the Modest Problem of Death (New York: Anchor, 2017), 2.

Yuval Noah Harari, Homo Deus: A Brief History of Tomorrow (New York: Harper, 2017), 22-23.

Gebru and Torres, in their conclusion, present a definition of AGI as “a system, which …seems to be an all-knowing machine akin to a ‘god’” and then state, “We argue that attempting to build something akin to a god is an inherently unsafe practice.” “The TESCREAL Bundle,” section 8. https://firstmonday.org/ojs/index.php/fm/article/view/13636/11599. Accessed September 28, 2024.

John C. Lennox, 2084, 103.

Gebru and Torres point out how discriminatory and racist some of the transhumanist ideas are: “The same discriminatory attitudes that animated first-wave eugenics are pervasive within the TESCREAL literature and community. For example, the Extropian listserv contains numerous examples of alarming remarks by notable figures in the TESCREAL movement. In 1996, Bostrom argued that ‘Blacks are more stupid than whites,’ lamenting that he couldn’t say this in public without being vilified as a racist, and then mentioned the N-word (Torres, 2023a). In a subsequent ‘apology’ for the e-mail message, he denounced his use of the N-word but failed to retract his claim that whites are more ‘intelligent’ (Torres, 2023a). Also in 1996, Yudkowsky expressed concerns about superintelligence, writing: ‘Superintelligent robots = Aryans, humans = Jews. The only thing preventing this is sufficiently intelligent robots.’ Others worried that ‘since we as transhumans are seeking to attain the next level of human evolution, we run serious risks in having our ideas and programs branded by the popular media as neo-eugenics, racist, neo-nazi, etc.’ In fact, leading figures in the TESCREAL community have approvingly cited, or expressed support for, the work of Charles Murray, known for his scientific racism, and worried about ‘dysgenic’ pressures (the opposite of ‘eugenic’) (see Torres, 2023a). Bostrom himself identifies ‘dysgenic’ pressures as one possible existential risk in his 2002 paper, alongside nuclear war and a superintelligence takeover. He wrote: ‘Currently it seems that there is a negative correlation in some places between intellectual achievement and fertility. If such selection were to operate over a long period of time, we might evolve into a less brainy but more fertile species, homo philoprogenitus (“lover of many offspring”)’ (Bostrom, 2002). 
More recently, Yudkowsky tweeted about IQs apparently dropping in Norway, although he added that the ‘effect appears within families, so it’s not due to immigration or dysgenic reproduction’ — i.e., less intelligent foreigners immigrating to Norway or individuals with ‘lower intelligence’ having more children.” https://firstmonday.org/ojs/index.php/fm/article/view/13636/11599, section 4.2.

Longtermism “emphasizes the moral importance of becoming a new posthuman species.” See Gebru and Torres, “The TESCREAL Bundle,” section 4.2: https://firstmonday.org/ojs/index.php/fm/article/view/13636/11599.

J. Budziszewski, What We Can’t Not Know: A Guide (Dallas, TX: Spence Publishing Company, 2003), 56.

According to Simon Chesterman, “many of the arguments in favour of AI personality implicitly or explicitly assume that AI systems are approaching human qualities in a manner that would entitle them to comparable recognition before the law.” Simon Chesterman, “Artificial Intelligence and the Limits of Legal Personality,” International and Comparative Law Quarterly 69, no. 4 (October 2020): 831. https://www.cambridge.org/core/services/aop-cambridge-core/content/view/1859C6E12F75046309C60C150AB31A29/S0020589320000366a.pdf/artificial-intelligence-and-the-limits-of-legal-personality.pdf. Accessed July 15, 2024.

Jennifer Blumenthal-Barby argues that the concept of personhood is unhelpful at best and “at the worst it is harmful and pernicious.” Her suggestion is that bioethicists stop using the term altogether and look for alternatives. Jennifer Blumenthal-Barby, “The End of Personhood,” The American Journal of Bioethics 24, no. 1 (2024): 3. https://doi.org/10.1080/15265161.2022.2160515.

Chesterman, “Artificial Intelligence and the Limits of Legal Personality,” 835.

Michael S. Burdett, Zygon: Journal of Religion and Science 55, No. 2 (June 2020): 349.

A good discussion of this is presented in Gwendolyn J. Gordon, “Environmental Personhood,” Columbia Journal of Environmental Law 43, no. 1 (2018): 87ff. https://journals.library.columbia.edu/index.php/cjel/issue/view/392. Accessed July 15, 2024.

Interestingly enough, the notion of protection from environmental persons such as a river is not generally discussed. If the Whanganui River is a person, could it not, by the same logic presented above, also be held responsible for deaths and property damage when it floods its banks?

Brandeis Marshall, “No Personhood for AI,” Patterns 4, no. 10 (November 2023): opening paragraph. https://www.sciencedirect.com/science/article/pii/S2666389923002453. Accessed July 29, 2024.

Lance Eliot, “Legal Personhood For AI Is Taking A Sneaky Path That Makes AI Law And AI Ethics Very Nervous Indeed,” Forbes, November 21, 2022. https://www.forbes.com/sites/lanceeliot/2022/11/21/legal-personhood-for-ai-is-taking-a-sneaky-path-that-makes-ai-law-and-ai-ethics-very-nervous-indeed/. Accessed July 29, 2024.

Ibid.

For an interesting discussion of the clear distinction between our thinking about what it would be like to be a bat (which can only amount to imagining what we would be like as a bat) and what it is like for a bat to be a bat, see Thomas Nagel, “What is it Like to be a Bat?,” The Philosophical Review 83, no. 4 (October 1974): 435–450. https://www.sas.upenn.edu/~cavitch/pdf-library/Nagel_Bat.pdf.

Margaret Boden, “Robot Says: Whatever,” Aeon, August 13, 2018. https://aeon.co/essays/the-robots-wont-take-over-because-they-couldnt-care-less. Accessed July 15, 2024.