(OSV News) — Since the explosion of generative artificial intelligence began in the 2020s – with its ability to produce human-like text, realistic images and compelling videos – AI users and developers have warned that consistent regulatory guardrails are needed to protect against documented harms, a problem of special concern to Pope Leo XIV.
On December 11, 2025, President Donald Trump signed an executive order bearing the promising title "Ensuring a National Policy Framework for Artificial Intelligence." Yet Trump's executive order, which calls for federal regulation to replace state regulation, specifically states that "AI companies must be free to innovate without burdensome regulation."
OSV News asked Taylor Black, founding director of the Leonum Institute for AI & Emerging Technology at The Catholic University of America and director of AI & Venture Ecosystems at Microsoft Corporation, to share his thoughts on how – or whether – these two goals can be reconciled, and what the Catholic faith has to say about the ethical development of AI.
Pros and Cons of AI Regulation
OSV News: What are the potential impacts of this executive order, both positive and negative?
Taylor Black: The executive order raises a question that we have grappled with since the early days of AI governance: what is the appropriate locus of regulatory authority over technologies that, by their nature, are borderless?
There are legitimate arguments on both sides here.
A unified national framework could provide the clarity and consistency that responsible developers – especially small businesses and startups – genuinely need. The current patchwork of state laws creates compliance challenges, and there is a real risk that well-intentioned but technically ill-informed regulations will inadvertently stifle beneficial innovation. This is not an abstract concern; I have seen it firsthand.
But we have to be honest about what we would be trading away. States have long served as laboratories of democracy, and in the field of AI, some of the most thoughtful regulatory efforts have emerged at the state level, precisely because state legislators are often closer to the communities experiencing the real-world effects of AI. Colorado's algorithmic discrimination law, which the executive order specifically criticizes, represents an attempt to address documented harms – harms that communities of color, low-income families and other marginalized groups are experiencing now, not hypothetically.
A Catholic View on AI Regulation
The order positions "innovation" and "responsible oversight" as fundamentally in tension. But this is precisely the false dichotomy that Pope Leo XIV addressed in his message to the Builder AI Forum (November 6-7, 2025, Rome): "The question is not only what AI can do, but also who we become through the technologies we build." This framing is important. It moves us from a purely utilitarian calculation to something more fundamental: a question about human identity and flourishing.
Catholic social teaching does not ask us to choose between human flourishing and economic dynamism: it insists that authentic development must include both. As the Holy Father reminded Forum participants, “technological innovation can be a form of participation in the divine act of creation.” But precisely because of this creative participation, “it has ethical and spiritual weight, because each design choice expresses a vision of humanity.”
The question is not whether we regulate, but whether we regulate wisely, in ways that protect human dignity while allowing true creativity to flourish.
Tensions in AI Regulation: National Law vs. Local Enforcement, Technological Freedom vs. Safety
OSV News: A major ethical concern is the protection of children. The administration said the framework will take this into account. Are there plausible problems if states are unable to engage in “local” regulation and enforcement?
Black: The administration is committed to ensuring that child safety protections remain intact, and Section 8(b)(i) explicitly exempts state laws relating to child safety from preemption. This is important and I take it seriously.
But here's what keeps me up at night: enforcement capacity.
State attorneys general have been at the forefront of efforts to protect children in the digital space. They understand their communities. They can move quickly. They have built relationships with local schools, parents, law enforcement and advocacy organizations. A national framework, no matter how well intentioned, cannot replicate this granular, relational capacity.
Online child exploitation is not an abstract political debate; it’s happening in real time, at scale, and perpetrators are adapting faster than centralized regulators can respond. The platforms themselves have acknowledged – sometimes under legal pressure – that their own security teams are overwhelmed.
We cannot afford to experiment with jurisdictional reshuffling while waiting to see what a national framework will look like. The principle of subsidiarity – that matters should be handled by the smallest competent authority – suggests that states should retain meaningful enforcement capacity, not just formal legislative language.
Any national framework must include robust funding for state enforcement, clear mechanisms for state attorneys general to act on child safety issues without federal preemption deadlines, and explicit sunset provisions that restore state authority if federal enforcement proves inadequate.
OSV News: “Big Tech” sometimes resists regulations deemed harmful to innovation. Yet the dangers of how AI already exploits humans are documented. What are your thoughts on how this can or will be balanced within a national political framework, particularly given the concerns expressed by the Vatican and Pope Leo?
Black: The argument that the industry makes — and I say this as someone who has worked in the industry for years — is that regulation stifles innovation. There is a version of this argument that is correct: poorly designed regulations, written without technical understanding, can create perverse incentives and real costs without corresponding benefits.
But there is also a version of this argument that is simply a demand for impunity. And we’ve seen where this leads.
The exploitation we documented at the Catholic University law school's November 14 conference, "Social Responsibility of Big Tech Companies" – sexual extortion, forced labor, algorithmic discrimination – is not hypothetical. These are current harms, affecting real people, often facilitated by systems deployed with minimal oversight because "moving fast" was seen as an unqualified good.
What could balanced AI regulation include?
Here is what I believe a balanced national framework should include:
First, transparency requirements that allow independent researchers, civil society and affected communities to understand how the systems work, without requiring the disclosure of truly proprietary technical details.
Second, meaningful accountability mechanisms that assign responsibility when AI systems cause harm. Blanket immunity benefits no one except bad actors.
Third, investment in formation, not just training. We need engineers, executives and policymakers formed in moral traditions that can help them address these questions – not merely trained in technical compliance.
Fourth, continued engagement with the most affected communities. The Rome Call for AI Ethics – signed by the Vatican, Microsoft, IBM and others – calls for inclusion. This means that the people governed by these systems should have a say in how they are designed and deployed. It also means building the shared infrastructure that allows institutions, particularly Catholic institutions, to act in intelligent, generative ways rather than remaining passive recipients of technology built by others.
Fifth, explicitly recognize that “innovation,” divorced from ethical responsibility, does not constitute authentic development.
The Holy Father has been clear about what happens when we get it wrong. In his first interview as pope, he warned that “the danger is that the digital world will go its own way and we will become pawns, or we will be pushed aside.” He noted that “extremely wealthy” people are investing in AI while “totally ignoring the value of human beings and humanity.”
Catholic Social Teaching: An Ethical and Human-Centered Framework for Addressing the Use and Regulation of AI
The vision of the Church is not Luddite resistance but something more radical: technology ordered to the integral development of the human person. And achieving this vision will require shared infrastructure – open, interoperable, governed by the communities it serves – not fragmented vendor stacks and ad hoc technical decisions. This is precisely the kind of federated, ecosystem-wide thinking that must inform any national framework.
OSV News: Is there anything else you would like to add?
Black: I don’t approach this question as an opponent of the technology industry or as a skeptic of innovation. I have spent my career developing and investing in emerging technologies because I believe in their potential to truly benefit humanity.
But I also believe that Catholic social teaching offers us something that is often missing from the current debate: a framework that begins not with systems or markets, but with the human person, created in the image and likeness of God, endowed with a dignity that no algorithm can confer or remove.
The Church cannot be satisfied with criticism from the margins. We must build – companies, infrastructure, formation programs – that embody the vision we profess. Pope Leo XIV is right: this must be "a profoundly ecclesial enterprise." And this effort requires institutional architecture, not just academic programs.
This vision – AI as a sign of hope for the entire human family – should be the standard against which we measure any national framework.
The question before us is not simply "How can we win the AI race?" The question is: "What kind of society are we building, and who is being left behind?"
Kimberley Heatherington is a correspondent for OSV News. She writes from Virginia.
