Thursday, April 3, 2025

Vatican document on artificial intelligence, "Antiqua et Nova"

There is an interesting discussion on the America Media site of "Antiqua et Nova," the Vatican document on artificial intelligence:

New Vatican document on A.I. warns against ‘creating a substitute for God’ | America Magazine

The article discusses both the immense potential and the ethical and anthropological challenges of AI.

"Known by its Latin title, “Antiqua et Nova” from the opening words of the text—“with wisdom both ancient [antiqua] and new [nova]”—the 30-page document said that “there is broad consensus that AI marks a new and significant phase in humanity’s engagement with technology, placing it at the heart of what Pope Francis has described as an ‘epochal change.’”

“...As AI advances rapidly toward even greater achievements,” ..., “it is critically important to consider its anthropological and ethical implications. This involves not only mitigating risks and preventing harm but also ensuring that its applications are used to promote human progress and the common good.”

"...The text begins “by distinguishing between concepts of intelligence in AI and in human intelligence.”

"It recalled how the concept of “intelligence” in A.I. has evolved over time and that “a significant milestone occurred in 1956 when the American computer scientist John McCarthy organized a summer workshop at Dartmouth University to explore the problem of ‘Artificial Intelligence,’ which he defined as ‘that of making a machine behave in ways that would be called intelligent if a human were so behaving.’ That workshop launched a research program focused on designing machines capable of performing tasks typically associated with the human intellect and intelligent behavior.”

"Since then, it said, “AI research has advanced rapidly,” and, as a result, “many tasks once managed exclusively by humans are now entrusted to AI” and “many researchers aspire to develop what is known as ‘Artificial General Intelligence’ (AGI)—a single system capable of operating across all cognitive domains and performing any task within the scope of human intelligence.” Commenting on this, the Vatican noted that underlying this project “is the implicit assumption that the term ‘intelligence’ can be used in the same way to refer to both human intelligence and AI.” But, it remarked, “In the case of humans, intelligence is a faculty that pertains to the person in his or her entirety, whereas in the context of AI, ‘intelligence’ is understood functionally....“AI cannot currently replicate moral discernment or the ability to establish authentic relationships.”

"...Part IV of the Vatican document is devoted to “the role of ethics in guiding the development and use of AI.” It acknowledged that while technology has “remedied countless evils…not all technological advancements in themselves represent genuine human progress.”

"...The Vatican, which has hosted several meetings on A.I. over the past decade, reported that “concerns about the ethical implications of technological development are shared not only within the church but also among many scientists, technologists, and professional associations, who increasingly call for ethical reflection to guide this development responsibly.”

"It recalled that Pope Francis told the G7: “Technological products reflect the worldview of their developers, owners, users, and regulators, and have the power to ‘shape the world and engage consciences on the level of values.’… Therefore, the ends and the means used in a given application of AI, as well as the overall vision it incorporates, must all be evaluated to ensure they respect human dignity and promote the common good.”

"...In this context, the Vatican document said that “the concentration of the power over mainstream AI applications in the hands of a few powerful companies raises significant ethical concerns.” 

"...Speaking of A.I., the economy and labor, the Vatican noted that A.I. is being increasingly integrated into economic and financial systems and warned that a few “large corporations” may stand to profit from A.I. more than “the businesses that use it.” It added that “while AI promises to boost productivity by taking over mundane tasks, it frequently forces workers to adapt to the speed and demands of machines rather than machines being designed to support those who work.”

"...In a section devoted to A.I. and health care, the Vatican acknowledged that while A.I. holds “immense potential in a variety of applications” in medicine, it should “enhance” but not “replace the relationship between patients and healthcare providers.”

"Similarly, speaking of A.I. and education, the Vatican emphasized “that the physical presence of a teacher creates a relational dynamic that AI cannot replicate.” At the same time, it said, “AI presents both opportunities and challenges. If used in a prudent manner, within the context of an existing teacher-student relationship and ordered to the authentic goals of education, AI can become a valuable educational source.”

"The Vatican also acknowledged the danger of A.I.-generated misinformation and deepfakes, warning that these could “gradually undermine the foundations of society” by “fueling political polarization and social unrest.” It called for “careful regulation” of A.I.-generated media."

"Speaking to one of the most common criticisms of A.I., the amount of energy and water it requires and its significant contributions to CO2 levels, the Vatican said, “It is vital to develop sustainable solutions that reduce their impact on our common home.” It also listed some ways that A.I. could be used to protect the environment, including by supporting sustainable agriculture, optimizing energy usage, and providing early warning systems for public health emergencies."

"The document next focused on A.I. and warfare. It recalled Pope Francis’ words in the 2024 Message for the World Day of Peace that “the ability to conduct military operations through remote control systems has led to a lessened perception of the devastation caused by those weapon systems and the burden of responsibility for their use, resulting in an even more cold and detached approach to the immense tragedy of war.”

"The last topic raised by the Vatican document related to A.I. and humanity’s relationship with God. Here it stated that “the presumption of substituting God for an artifact of human making is idolatry, a practice Scripture explicitly warns against (e.g., Ex. 20:4; 32:1-5; 34:17). Moreover, AI may prove even more seductive than traditional idols for, unlike idols that ‘have mouths but do not speak; eyes, but do not see; ears, but do not hear’ (Ps. 115:5-6), AI can ‘speak,’ or at least gives the illusion of doing so (cf. Rev. 13:15).”

The article concluded:

"It is vital to remember that AI is but a pale reflection of humanity—it is crafted by human minds, trained on human-generated material, responsive to human input, and sustained through human labor. AI cannot possess many of the capabilities specific to human life, and it is also fallible. By turning to AI as a perceived ‘Other’ greater than itself, with which to share existence and responsibilities, humanity risks creating a substitute for God. However, it is not AI that is ultimately deified and worshipped, but humanity itself—which, in this way, becomes enslaved to its own work."

12 comments:

  1. ...it is crafted by human minds (therefore subject to erroneous assumptions), trained on human-generated material (garbage in, garbage out), responsive to human input (dependent upon the questions that you ask), and sustained through human labor (both systematic and idiosyncratic errors).

    I think AI has been over-hyped. As we become aware of the problems, the bubble will burst.

    AI is not alone in the pantheon of modern gods: money, the stock market, television, the automobile, etc.

    I disagree, however, that we are worshipping humanity as god(s). As throughout history, we are simply worshipping the works of our hands.

    Replies
    1. Jack, I think you are right that we are worshipping the work of our hands (or maybe we should say the work of our brains).
      As for AI having been over-hyped, one of the NYTimes writers would agree with you. I'm planning to do a "part 2" AI post later on featuring her point of view.


  2. I will need to plow through the document when I get a chance. The quotes highlighted here are fine but, candidly, they don't really cast any new light on the ethical concerns already being raised by those who are engaged in AI development, marketing and implementation.

    The one thing I read here which I haven't run across before is the idea of people literally idolizing AI. That's an area of human activity that most secular observers may agree is "proper" to the Catholic church. The ethical concerns, the impact on the labor market, the impact on warfare and so on: as I say, it's all fine, but it just sort of rehashes what people already are talking about.

    I don't doubt that the church is playing catch-up here to the reality of AI's swift implementation/penetration into economic activity and other aspects of human society.

    Replies
    1. About "...the ethical concerns already being raised by those who are engaged in AI development, marketing and implementation", I'm sure some of them are considering all the ramifications carefully and thoughtfully. But some of the players are going about the development and implementation of AI in a hell-bent-for-leather way in a race for the top. A phrase I am reading lately is " the Singularity". I think part of it is gamer and sci-fi hype. But they're not stopping to consider if a little caution might be in order.

  3. Several months back, I got several books on AI recommended by Statista. After looking through them, and reading parts of them, I decided that I probably understand the AI process better than most even though I have never used an AI program.

    The great hopes that are being pinned on AI largely result from the huge databases of information that are now available to AI programs and the great amount of compute power now available to analyze those large databases.

    However, computer databases and state-of-the-art computer systems are things that I dealt with for about 20 years, from 1982 through 2002, in two countywide mental health systems.

    In both those systems, the organization provided me with a full-time research assistant whose major task was to maintain the quality of the data provided by clinicians, e.g. demographics of each person, their presenting problems, diagnoses, amounts and types of therapy, etc.

    In the beginning, I wondered how I could ever help clinicians with computers, since they already had all the data and more in their records. Surely they knew their clients better than I could with the computer.

    I was wrong. We humans are no match for computers when it comes to maintaining and analyzing large amounts of data. Over the years, I would estimate that fully half of the major findings of my research were things that were not on the radar screens of the clinicians. Even for the other half, the computer often gave differing viewpoints, e.g., revealing major problems as not very major, and minor problems as really major.

  4. In Toledo, Ohio, in the 1980s we had a natural experiment. There were four mental health agencies and one substance abuse agency. We all used the same computer system, with the same types of data being entered, and the same report generators.

    Two of the mental health agencies had Ph.D. social psychologists using the report generator. We did very sophisticated things with reports and made the computer central to running our agencies.

    Two other mental health agencies had M.S.W. clinical directors. They understood how to get reports out of the computer, but used them only for very simple things, like keeping track of caseloads.

    The substance abuse agency had an associate-of-arts computer person who oversaw entry of the data into the system (required by the mental health board) but could not figure out how to get reports out of the system.

    Large computer data systems require not only good data entry and database maintenance; they are also very dependent upon their programs and upon the ability of users to use those programs. In Toledo, those of us with Ph.D.s could do marvels with a relatively sophisticated report generator.

    However, when I went from the agency level to the board level, I began to use SPSS (the Statistical Package for the Social Sciences). It requires a lot of programming skill to use. I tried in vain to teach clinicians how to use it. One of them, who later had to use SPSS for her master's degree, thanked me for all my efforts in trying to teach her how to use it.

  5. With any AI program, very sophisticated users are going to get much more out of it. I could probably teach an AI program to do all the analyses that I did with SPSS so that I would only have to give simple commands (I suspect SPSS probably now includes options like that), but I don’t think that the average clinician will be able to ask it to do what I did unless the database also includes many examples of what I did in the specific situations in which I used it (unlikely).

    Finally, although AI has the potential to be neutral in its advice, I doubt that it will have enough social skills to compete with my reputation. People in our systems knew that I was there to let the data speak for itself. I had no favorite persons, agencies, etc. That was because I brought everyone into the data analysis at all points. I tried to make the situation as clear as possible without making recommendations. When someone said, “Jack, the obvious implication is…” I would answer, “What do other people think about that suggestion?” People came to trust me as a person with whom they frequently interacted.

    Yes, I could train an AI program to do similar things. But I maintained my role by working day by day to show no bias to anyone even in conversations with my closest associates. I think computers are going to have a tough time imitating what I did.

    As the recent book Friends has argued, our brains are constructed to deal with about 150 social relationships, with inner circles of 5, 15, and 50 persons. Maybe a computer can become one of those friends, but I doubt whether it can replace many of them. I think it would have to develop a personality to become one of our friends.

  6. There is a seminar on AI in May, on a Friday evening and Saturday, at the Benedictine retreat house in our area. The presenters are Creighton University professors whose specialty is information technology. I haven't made up my mind yet whether I will go. I have kind of a visceral resistance to AI, but maybe I have a duty to become better informed. I am not computer illiterate; I used computers at work, and I have an Android phone and a tablet. We have a PC also. But a seminar on AI is not my idea of an enjoyable weekend.

  7. All the hype about AI reminds me of the hope for the internet, which was to be the great egalitarian leveling technology. The whole internet thing was instituted by ARPA in the first place for military purposes. Since then, whatever social utility it had has been enclosed by the technocapitalists, and it is now the greatest tool for simultaneously brainwashing and surveilling the populace. Now, this is being enhanced by AI. I have no fear of AI becoming sentient and removing the human race. My real fear is that it is the tool of very nasty and very sentient beings like Musk and Bezos and their ilk, and THAT could end up removing the human race. The main advantage of AI for these lionized psychopaths is that it has no feelings or thoughts of its own and will follow the dictates of the rich without a nanosecond of hesitation. It will be the perfect denier of medical coverage for United Healthcare. A robot dog with a .45 caliber nose will kill and have no problem later with moral injury, nightmares, and PTSD. Even IDF soldiers are getting those. The energy needs of AI will be too great, and it will also collapse with the inevitable collapse of the energy system. But, in the meantime, a lot of harm will be done. I’m reminded of C.S. Lewis saying that the only thing wrong with modern science is that it was born in a bad neighborhood. AI is growing up in a really, really bad neighborhood.

    Replies
    1. I didn't know until lately what an energy hog AI could be.

      How did the "Hands Off" event go? It looked like a lot of people came out for it, according to what I saw on news sites. I think it was good they held it in a lot of locations, not just one. Hopefully it sent a message.

    2. Katherine, around 500 showed up in Courthouse Square in Stroudsburg. Pretty good turnout, but hoping it grows. I think they ran a bit too long; I left after standing for two hours and fifteen minutes. I could barely walk at first, but I loosened up as I kept going.
