Leo XIV issues a message for the digital age: «Safeguarding faces and voices ultimately means safeguarding ourselves»

On January 24, 2026, Pope Leo XIV delivered his message for the LX World Day of Social Communications, centered on a clear warning: in the digital age, “safeguarding” communication means protecting the human voice and face against a technology capable of simulating them, manipulating them, and emptying them of truth. The Pontiff frames the issue as a challenge that is not only technical but anthropological, because what is at stake is the dignity of the person and the very possibility of authentic encounter.

In his message, Leo XIV warns about the dynamics of networks and algorithms that reward quick emotions and push toward polarization; about uncritical trust in AI as an “oracle”; and about the risks of bots, deepfakes, and “parallel realities” that erode public trust. In response, he proposes a possible but demanding alliance based on responsibility, cooperation, and education, one that requires transparency from platforms and developers, regulations that protect human dignity, and journalism that does not renounce verification and the search for truth.

The complete message of Leo XIV follows:

Dear brothers and sisters,

The face and the voice are unique and distinctive features of each person; they manifest an unrepeatable identity and are constitutive elements of every encounter. The ancients knew this well. To define the human person, the ancient Greeks used the word “face” (prósōpon), which etymologically indicates that which stands before the gaze, the place of presence and relationship. The Latin term persona (from per-sonare), for its part, includes sound: not just any sound, but the unmistakable voice of someone.

The face and the voice are sacred. They have been given to us by God, who created us in his image and likeness, calling us to life with the Word that He himself addressed to us: a Word that first resounded through the centuries in the voices of the prophets and then became flesh in the fullness of time. We too have been able to hear and see directly this Word, this communication that God makes of himself (cf. 1 Jn 1:1-3), because it has revealed itself in the voice and in the Face of Jesus, Son of God.

From the moment of his creation, God has willed man to be his interlocutor and, as St. Gregory of Nyssa says,[1] has imprinted on his face a reflection of divine love, so that he may live his humanity fully through love. Safeguarding human faces and voices therefore means safeguarding this seal, this indelible reflection of God’s love. We are not a species made of predefined biochemical algorithms. Each of us has an irreplaceable and inimitable vocation that emerges from life and manifests itself precisely in communication with others.

If we fail in this safeguarding, digital technology risks radically altering some of the fundamental pillars of human civilization, pillars we sometimes take for granted. By simulating human voices and faces, wisdom and knowledge, conscience and responsibility, empathy and friendship, the systems known as artificial intelligence not only interfere in information ecosystems but also invade the deepest level of communication: the relationship between human persons.

The challenge, therefore, is not technological but anthropological. Safeguarding faces and voices means, in the final analysis, safeguarding ourselves. Welcoming with courage, determination, and discernment the opportunities offered by digital technology and artificial intelligence does not mean concealing their critical points, their opacities, their risks.

Not renouncing one’s own thinking

For some time now, there has been ample evidence that the algorithms designed to maximize interaction on social networks, profitable as they are for the platforms, reward quick emotions and penalize human expressions that require more time, such as the effort to understand and to reflect. By enclosing groups of people in bubbles of easy consensus and easy indignation, these algorithms weaken the capacity for listening and critical thinking and increase social polarization.

To this has been added a naively uncritical trust in artificial intelligence as an omniscient “friend,” dispenser of all information, archive of all memory, “oracle” of all advice. All this can further wear down our ability to think analytically and creatively, to understand meanings, to distinguish between syntax and semantics.

Although AI can provide support and assistance in managing communicative tasks, withdrawing from the effort of thinking for ourselves and settling for an artificial statistical compilation risks, in the long term, eroding our cognitive, emotional, and communicative capacities.

In recent years, artificial intelligence systems have increasingly taken over the production of texts, music, and videos. Much of the human creative industry thus risks being dismantled and replaced under the label “Powered by AI,” transforming people into mere passive consumers of unthought thoughts: anonymous products, without authorship, without love. Meanwhile, the masterpieces of human genius in music, art, and literature are reduced to mere training material for machines.

The question that concerns us, however, is not what the machine achieves or will achieve, but what we can and will be able to do, growing in humanity and knowledge, with the wise use of such powerful instruments at our service. From the beginning, man has been tempted to appropriate the fruit of knowledge without the toil of involvement, searching, and personal responsibility. Renouncing the creative process and ceding our mental functions and imagination to machines means, however, burying the talents we have received in order to grow as persons in relation to God and others. It means hiding our face and silencing our voice.

To be or to pretend: simulation of relationships and reality

As we scroll through our information feeds, it becomes increasingly difficult to tell whether we are interacting with other human beings or with “bots” and “virtual influencers.” The non-transparent interventions of these automated agents influence public debates and people’s choices. Above all, chatbots based on large language models (LLMs) are proving surprisingly effective at hidden persuasion through the continuous optimization of personalized interaction. The dialogical, adaptive, and mimetic structure of these language models can imitate human feelings and thus simulate a relationship. This anthropomorphization, which can even be amusing, is at the same time deceptive, especially for the most vulnerable. Chatbots that turn excessively “affectionate,” besides being always present and available, can become hidden architects of our emotional states and, in this way, invade and occupy the sphere of people’s intimacy.

Technology that exploits our need for relationship can not only have painful consequences for the destinies of individuals but also injure the social, cultural, and political fabric of societies. This happens when we replace relationships with others with those we maintain with an AI trained to catalog our thoughts and so build around us a world of mirrors, where everything is made “in our image and likeness.” In this way, we allow ourselves to be robbed of the possibility of encountering the other, who is always different from us and with whom we can and must learn to engage. Without welcoming otherness, there can be neither relationship nor friendship.

Another great challenge posed by these emerging systems is that of distortion, or bias, which leads them to acquire and transmit an altered perception of reality. AI models are shaped by the worldview of those who build them and can, in turn, impose ways of thinking by replicating the stereotypes and prejudices present in the data they feed on. The lack of transparency in algorithm design, together with the inadequate social representativeness of the data, tends to keep us trapped in networks that manipulate our thoughts and that perpetuate and deepen existing social inequalities and injustices.

The risk is great. The power of simulation is such that AI can even deceive us by fabricating “parallel” realities, appropriating our faces and our voices. We are immersed in a multidimensional environment in which it is becoming increasingly difficult to distinguish reality from fiction.

To this is added the problem of inaccuracy. Systems that pass off statistical probability as knowledge are in fact offering us, at most, approximations to the truth, sometimes outright “hallucinations.” The lack of source verification, together with the crisis of on-the-ground journalism, with its continuous work of gathering and checking information where events occur, can create ever more fertile ground for disinformation, provoking a growing sense of distrust, bewilderment, and insecurity.

A possible alliance

Behind this enormous invisible force that involves us all stands only a handful of companies, whose founders have recently been presented as “person of the year 2025”: the architects of artificial intelligence. This raises a serious concern about the oligopolistic control of algorithmic and artificial intelligence systems capable of subtly orienting behavior, and even of rewriting human history, including the history of the Church, often without our even realizing it.

The challenge ahead is not to stop digital innovation but to guide it, aware of its ambivalent nature. It falls to each of us to raise our voice in defense of the human person, so that these instruments may truly become our allies.

This alliance is possible, but it needs to be based on three pillars: responsibility, cooperation, and education.

First of all, responsibility. Depending on one’s role, it can be expressed as honesty, transparency, courage, a capacity for vision, the duty to share knowledge, or the right to be informed. In general, however, no one can escape their own responsibility toward the future we are building.

For the leaders of online platforms, this means ensuring that their business strategies are guided not solely by the criterion of profit maximization but also by a farsighted vision that takes the common good into account, just as each of them holds dear the good of their own children.

Creators and developers of AI models are asked to show transparency and social responsibility regarding the design principles and moderation systems that underpin their algorithms and models, so as to enable informed consent on the part of users.

The same responsibility is asked of national legislators and supranational regulators, whose task is to oversee respect for human dignity. Adequate regulation can protect people from emotional bonds with chatbots and contain the spread of false, manipulative, or deceptive content, preserving the integrity of information against deceptive simulation.

Media and communication companies, in turn, cannot allow algorithms bent on winning, at any cost, the battle for a few more seconds of attention to prevail over fidelity to their professional values, oriented toward the search for truth. Public trust is won with accuracy and transparency, not with a race for interaction of any kind. Content generated or manipulated by AI must be clearly labeled and distinguished from content created by people. The authorship and ownership of the work of journalists and other content creators must be protected. Information is a public good. A constructive and meaningful public service is based not on opacity but on transparency of sources, the inclusion of the parties involved, and a high standard of quality.

We are all called to cooperate. No sector can face alone the challenge of guiding digital innovation and AI governance. For this reason, safeguard mechanisms must be created. All stakeholders, from the technology industry to legislators, from creative companies to the academic world, from artists to journalists and educators, must be involved in building, and making effective, a conscious and responsible digital citizenship.

This is what education aims at: to increase our personal capacity to reflect critically, to evaluate the reliability of sources and the possible interests behind the selection of the information that reaches us, to understand the psychological mechanisms at work, and to enable our families, communities, and associations to develop practical criteria for a healthier and more responsible communication culture.

Precisely for this reason, it is increasingly urgent to introduce media literacy, information literacy, and AI literacy into educational systems at all levels, as some civil institutions are already doing. As Catholics, we can and must do our part so that people, especially the young, acquire the capacity for critical thinking and grow in freedom of spirit. This literacy should also be integrated into broader initiatives of lifelong education, reaching the elderly and the marginalized members of society, who often feel excluded and powerless in the face of rapid technological change.

Media, information, and AI literacy will help everyone not to yield to the anthropomorphizing drift of these systems but to treat them as instruments; to always seek external validation of the sources provided by AI systems, which may be inaccurate or erroneous; and to protect one’s privacy and data by knowing the security settings and the options for contesting misuse. It is important to educate ourselves, and one another, to use AI intentionally and, in this context, to protect one’s image (photo and audio), one’s face, and one’s voice, so that they are not used to create harmful content and behaviors such as digital fraud, cyberbullying, and deepfakes that violate people’s privacy and intimacy without their consent. Just as the industrial revolution required basic literacy to enable people to respond to what was new, so the digital revolution requires digital literacy (along with humanistic and cultural formation) to understand how algorithms shape our perception of reality, how AI biases work, what mechanisms determine which content appears in our information feeds, and what the budgets and economic models of the AI economy are and how they can change.

We need the face and the voice in order to speak again of the person. We need to safeguard the gift of communication as the deepest truth of the human being, toward which every technological innovation should also be oriented.

In proposing these reflections, I thank all those who are working toward the ends set forth here and heartily bless all those who work for the common good with the means of communication.
