By McGregor McCance
As investments in artificial intelligence (AI) continue to grow, a critical element is not being adequately addressed, raising risks for people, businesses and society. Scott Beardsley, former dean of the University of Virginia’s Darden School of Business, believes far more attention must be paid to ethics as it applies to AI, in both theory and practice.
“The good thing is that the tools, frameworks and conceptual clarity for ethical AI exist and are advancing rapidly,” Beardsley said. “What’s wrong is implementation. Many companies still view ethics as optional, while structural risks such as bias, opacity and concentration of power remain entrenched.”
Time is running out to make a significant difference.
The next five years, Beardsley said, will determine whether ethics will be built into the infrastructure — or whether it will be applied too late and at greater cost.
Darden is helping lead efforts to implement ethical AI, and the topic has become a central focus in the classroom, in research and in thought leadership that helps businesses thrive.
Darden’s LaCross Institute for Ethical Artificial Intelligence in Business, launched in 2024, provides a nexus for creating and teaching AI-related knowledge at Darden and across the University of Virginia.
Here are several key questions that underscore the urgency of the AI ethics challenge, drawn from research and studies conducted by the LaCross Institute.
Why are AI ethics currently at a critical inflection point?
Technology is evolving faster than governance and safeguards can keep pace. AI is already shaping people’s lives, the harms are real, regulation is overdue, and adoption is accelerating. Decisions made today will shape how AI is integrated into society for decades.
Ethics cannot be retrofitted later. Waiting until AI is fully integrated into critical systems to correct bias, opacity or governance failures would be like installing seat belts after cars are already on the road. The next five years represent a window of opportunity to embed ethical frameworks before risks become locked in and irreversible.
The United Nations 2030 AI Ethical Agenda views the next five years as a window of opportunity: close enough to require immediate action, but long enough to implement structural safeguards.
What factors have contributed to the current situation?
A culture of “act fast and fix later” may work in consumer technology, but it is dangerous when applied to AI systems that determine creditworthiness or medical treatment. Once these systems are deployed, adding ethics after the fact is slower, more expensive and harder to enforce. By 2030, AI will be so deeply integrated into business and government infrastructure that retrofitting ethical standards will be nearly impossible.
Regulatory frameworks are fragmented and lagging. The EU AI Act, which comes into full force in 2026, represents the first comprehensive regulatory regime. Elsewhere, the landscape is uneven: the United States has only partial guidance, while countries like Brazil, South Africa and Indonesia are still developing policies. AI is global, but the rules are national.
What is the difference between AI ethics and ethical AI?
Although they are related, they describe two perspectives: one theoretical, the other practical.
AI ethics is the academic and philosophical study of the moral, social and political issues raised by artificial intelligence. It is concerned with principles, frameworks and normative debates. It answers the question: What should we do?
Ethical AI, on the other hand, refers to the practical implementation of those principles in the design, development and deployment of AI systems. It is about ensuring that AI behaves in a helpful, honest and harmless way, not only in its outputs but throughout the development cycle. It answers the question: How do we actually do it?
AI ethics without ethical AI is toothless. Ethical AI without AI ethics is aimless. Both are necessary. The current imbalance – strong rhetoric on ethics, weaker attention to practice – is what makes this moment particularly risky.
How is Darden approaching ethical AI?
The LaCross Institute views ethical AI as a value chain – an end-to-end set of activities into which ethics must be designed and continuously verified. The model has five interconnected stages:
- Infrastructure — computing, cloud, networks and their environmental footprint
- Data and metrics — sourcing, preparing and managing data
- Models and training — architecture selection, tuning and optimization
- Applications and implementation — deployment in real workflows
- Governance and outcomes — continuous monitoring and impact analysis
Each stage creates distinct value opportunities and ethical risks that require built-in controls and accountability from the start. The value chain operationalizes ethics, turning “be ethical” into: who does what, when and with what evidence. It is the difference between aspirational principles and repeatable governance practices, and it is how leaders make ethics part of AI’s ROI rather than an added cost. LaCross Institute Director Marc Ruggiono is the lead author of an institute white paper on the ethical AI value chain to be released in early 2026.
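To make “who does what, when and with what evidence” concrete, here is a minimal sketch in Python of how a team might record controls and evidence for each stage. The stage names follow the list above; the owner roles, controls and the `StageControl` structure itself are illustrative assumptions, not the institute’s published framework.

```python
# A minimal sketch (not the LaCross Institute's actual framework) of
# recording ownership, controls and evidence per value-chain stage.
# Owners and controls below are invented placeholders for illustration.
from dataclasses import dataclass, field


@dataclass
class StageControl:
    stage: str           # value-chain stage from the article's list
    owner: str           # hypothetical accountable role
    controls: list[str]  # checks designed in at this stage
    evidence: list[str] = field(default_factory=list)  # audit trail

    def record(self, artifact: str) -> None:
        """Log proof that a control actually ran."""
        self.evidence.append(artifact)


value_chain = [
    StageControl("Infrastructure", "Cloud platform lead",
                 ["environmental-footprint reporting"]),
    StageControl("Data and metrics", "Data governance lead",
                 ["provenance review", "training-data bias audit"]),
    StageControl("Models and training", "Model risk manager",
                 ["architecture review", "fairness evaluation"]),
    StageControl("Applications and deployment", "AI product owner",
                 ["human-in-the-loop sign-off"]),
    StageControl("Governance and outcomes", "Chief risk officer",
                 ["continuous monitoring", "impact analysis"]),
]

# Ethics becomes auditable when every stage can answer: who owns it,
# what was checked, and where the evidence lives.
for s in value_chain:
    s.record("quarterly review logged")
    print(f"{s.stage}: owner={s.owner}, controls={s.controls}")
```

The point is not the data structure but the discipline it encodes: every stage carries a named owner and an audit trail, rather than relying on a standalone ethics review at the end.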
Illustration by Daniel Liévano
Are AI ethics an afterthought for many companies or organizations?
Too often, AI ethics have been treated as an afterthought rather than a fundamental design principle. Organizations may adhere to broad “ethical principles,” but when it comes to creating or deploying AI, ethics are incorporated late in the process, if at all.
When ethics is left to the end, it is always the weakest link. Businesses find themselves reacting to scandals instead of building trust and resilience.
Are competitive pressures pushing companies to rush AI implementation?
Organizations often feel pressure from investors, boards of directors or competitors to deploy AI products quickly. When AI is rushed, small errors can lead to systemic damage. For example, biased data sets can produce discriminatory lending or hiring practices that then ripple across entire markets. Tesla’s Autopilot illustrates how the push for a rapid launch created gaps between what the system could do and how users perceived it, leading to accidents and regulatory scrutiny.
Speed can provide a temporary competitive advantage, but it often backfires. Flawed launches damage consumer confidence, lead to lawsuits and invite regulatory crackdowns, creating reputational damage that outweighs the initial gains. Companies that chase speed without safeguards are gambling with trust, compliance and long-term sustainability.
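To show how small the first line of defense can be, here is a toy pre-launch check in the spirit of the biased-lending example above: it compares model approval rates across groups and flags the model when the gap exceeds a tolerance. The data, group labels and the 0.2 threshold are all invented for illustration; real fairness audits use richer metrics and thresholds set by governance.

```python
# Toy pre-launch fairness check: compare approval rates across groups.
# The data and the 0.2 tolerance are invented for illustration only.
approvals = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", False), ("group_b", True), ("group_b", False),
]

# Bucket decisions by group.
by_group: dict[str, list[bool]] = {}
for group, approved in approvals:
    by_group.setdefault(group, []).append(approved)

# Approval rate per group and the gap between the best- and
# worst-treated groups (a simple demographic-parity difference).
rate = {g: sum(v) / len(v) for g, v in by_group.items()}
gap = max(rate.values()) - min(rate.values())
print(rate, f"gap={gap:.2f}")

# Simple guardrail: flag the release if the gap exceeds the tolerance.
# On this toy data the gap is 0.50, so the check fires.
if gap > 0.2:
    print("FLAGGED: approval-rate gap exceeds policy; review before launch")
```

A check like this costs minutes to run; the scandals, lawsuits and recalls it helps avoid are what make ethics part of the return on AI rather than a tax on it.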
Is there a competitive advantage for companies that take ethical AI seriously?
Transparent and fair companies build trust and brand loyalty. As the LaCross Institute puts it, “helpful, honest and harmless” AI is not a barrier to innovation but a foundation for sustainable growth.
Companies that integrate ethics into their AI strategy gain a dual benefit: they mitigate risk while building trust as an engine of growth. Ethical AI is moving from a cost center to a strategic asset. Companies that understand this early will be better positioned for the next decade.
Where will leadership on these issues come from?
From those who design, purchase, deploy, assure and audit AI – particularly large enterprises, standards bodies, multi-stakeholder consortia, academia and civil society. In the short term, these actors can move faster than legislation, shaping norms through public procurement, standards adoption and market discipline.
How does AI affect MBAs?
AI automates elements of analysis and content creation, but the management work that MBAs are trained to do – defining problems, balancing trade-offs, managing risks, and orchestrating cross-functional execution – is becoming increasingly important as AI evolves.
Rather than replacing MBAs, AI is creating new management roles: AI product owner, model risk manager, AI procurement manager, AI manager and director of data governance. These roles reward graduates who can connect technical teams, legal and compliance functions, and profit-and-loss leaders using shared frameworks and measurable controls.
AI automates some analysis, but it increases the need for leaders who can design systems that are reliable, fair, and verifiable in production. The MBA remains relevant by becoming the degree that teaches how to manage the business of AI or manage AI as a business function.
What does the LaCross Institute do differently from other AI-focused academic institutions?
The LaCross Institute is distinguished by its operational and managerial orientation, distinct from theoretical ethics centers and purely technical AI laboratories.
It treats AI ethics as a leadership-driven, operational discipline rooted in research, education, and practitioner engagement. Through robust funding, a value chain management framework, ambitious academic programs, and university-wide collaboration, it provides business leaders with concrete tools to govern AI ethically and effectively.
