In today's world, whether AI is good or bad rests largely in the hands of its driver. If the driver is good, AI will be used for good; if the driver is bad, examples of AI being used in warfare are already evident to us. We have seen monstrous development in technology, but alas, still no significant work has been done on controlling the monster. History offers the example of J. Robert Oppenheimer, who led the development of the atomic bomb and later reflected, “Now I am become Death, the destroyer of worlds,” highlighting how knowledge without a firm moral hand can harm humanity. On the other hand, we have also seen science doing wonders in medicine and technology and serving humanity. Weak knowledge with strong morals can do more good than strong knowledge with bad morals.
A wonderful example is Pope Francis, who is, interestingly, also remembered as an AI ethicist: he raised a moral voice for AI to be used in service of human dignity, not merely corporate efficiency. He launched the Rome Call for AI Ethics with partners such as Microsoft, IBM and the Italian government, setting out guiding principles of transparency, inclusion, accountability, impartiality, reliability, and security and privacy. In June 2024 he became the first pontiff to address a G7 Summit, speaking on AI ethics, and in doing so bridged religious ethics and secular ethics.
'The Rome Call' Plan
In February 2020, the Pontifical Academy for Life (a Vatican body), together with tech giants IBM and Microsoft, the UN's Food and Agriculture Organization (FAO) and Italy's Ministry of Innovation, signed the “Rome Call for AI Ethics” in Rome. Pope Francis lent his support to this initiative, urging that development of AI be guided by a strong moral compass.
The Rome Call articulates six guiding principles for AI:
1. Transparency
2. Inclusion
3. Accountability
4. Impartiality
5. Reliability
6. Security & Privacy
In essence, it calls for “algor-ethics” – an ethical framework for algorithms to ensure that AI serves all humans and the common good, respects the dignity of each person and does not focus solely on profit or replace workers entirely. Pope Francis added:
“Today, while much of society debates whether—or when—AI will replace humans at work, Catholic teaching pushes us to ask a deeper question: What are the risks if AI deprives humans of the work that makes us human?”
The Rome Call emphasizes that AI systems should be understandable (transparent) to all, treat everyone fairly without discrimination, have clear responsibility for outcomes, be impartial and free from unfair bias, be reliable and respect privacy. Signatories committed to upholding and promoting these principles globally.
'The Rome Call' Outcome
The Rome Call has since grown into a multi-faith, international ethical movement. In January 2023, representatives of the world's three Abrahamic religions – Christianity, Islam and Judaism – came together at the Vatican to jointly endorse the Rome Call. At this historic interfaith event, the Chief Rabbi of Israel and a prominent Muslim leader from the UAE joined Catholic officials in signing the Call's principles. Pope Francis welcomed this “Abrahamic commitment,” stating that algor-ethics should be present in public debate and technical development, ensuring “no one is excluded” from the benefits of AI. The Pope specifically warned: “It is not acceptable that the decision on the life and fate of a human being be entrusted to an algorithm”, underlining the need for human-centered values in AI.
Following the faith leaders' endorsement, the original corporate partners (IBM, Microsoft, etc.) reaffirmed their commitment to designing AI systems in line with the Rome Call's principles. Over time, additional tech companies have signed on (e.g. Cisco in 2022 and Qualcomm in 2023 joined the pledge). The Rome Call's influence can be seen in how it bridges ethical discourse with practical industry pledges. While it is not a law or formal regulation, it has raised awareness of AI ethics in faith communities and added moral weight to calls for “AI at the service of humankind”. Its principles closely mirror secular AI ethics codes, reinforcing cross-sector consensus on ideas like transparency, fairness (impartiality) and accountability, while grounding them in an appeal to human dignity and the common good. The Vatican continues to promote these ideas through conferences and its RenAIssance Foundation, positioning religious leadership as a partner in global AI governance dialogues.
A Moral Challenge
'Never summon a power you can't control' – Yuval Noah Harari
The challenge of AI today is a moral one; a strong moral hand rooted in ancient wisdom is therefore the need of the hour. The same greedy capitalists who have damaged this planet, leaving everyone to face the climate crisis, will not think twice before using AI for their own greed. Harari further warns that as algorithms push humans out of the job market, wealth and power may become concentrated in the hands of a tiny elite that owns the all-powerful algorithms, creating unprecedented social and political inequality. The pace at which AI is advancing is frightening; AI experts themselves admit it is getting hard to keep up with the technology's changes. And what we stand in awe of is merely the commercial tool; we do not know what capabilities of real AI remain hidden from the masses.
AI for Profit or AI for Good?
The corporate track record shows that profit alone cannot be trusted to guide AI development, no matter how noble the public messaging may appear. In 2020, IBM, Amazon and Microsoft halted or restricted their facial recognition programs after public pressure exposed how easily these tools could fuel racial bias and violate civil liberties. At the same time, the few ethical voices inside the industry were being pushed aside. On December 2, 2020, Google's own AI ethics co-lead, Dr Timnit Gebru, one of the world's most respected researchers in the field, was removed from her position after co-authoring a paper that highlighted the dangers of large language models: their racial and gender biases, their environmental cost and their potential to reproduce harmful stereotypes at scale. Google executives asked her to retract the paper or remove her name, a request she refused. As she later put it, “I will not pretend a problem does not exist just because it is inconvenient for the company.” She added: “Companies are not going to self-regulate. We need something better than a profit motive to govern these systems.”
Religious values can help us make the right decisions because religion places creation at the centre. The West has superior technology; there is no doubt about that. But the East has a rich heritage of religious values that can serve as a guiding hand for the West. There has never been a better time to advocate for the East to be heard, or, better yet, for the East to create technology of its own to tame the monsters made in the West.
The writer is a Space Generation Advisory Council member and a space sustainability writer.