Four tensions organizations face when using agentic AI:
- Scalability versus adaptability. Constraining agent systems too much limits their effectiveness, while granting too much freedom can introduce unpredictability.
- Experience versus opportunity. Agentic AI is forcing organizations to rethink how they evaluate cost, time, and ROI.
- Supervision versus autonomy. Excessive supervision of agentic AI systems can negate the benefits of autonomy, while insufficient supervision exposes organizations to risk.
- Modernization versus reengineering. Organizations must decide whether to quickly integrate agentic AI into existing workflows or take the time to completely reimagine those workflows.
A growing number of organizations are no longer just experimenting with artificial intelligence: they are starting to delegate work to it. According to a new report from the MIT Sloan Management Review and the Boston Consulting Group, “The Emerging Agentic Enterprise: How Leaders Should Navigate a New Era of AI,” more than a third of companies surveyed are already deploying agentic AI systems that can plan, act, and learn autonomously, and another 44% plan to do so. These systems don’t just assist employees or automate isolated tasks; they pursue goals, coordinate across workflows, and adapt based on results.
This marks a fundamental shift in how work is done and in how leaders should think about AI, according to the report. As agentic systems come to be viewed less as tools and more as teammates, organizations face a new set of strategic choices around control, investment, governance, and design.
The report, based on a global survey of more than 2,000 people, presents these choices as four fundamental tensions that leaders must manage if they hope to realize the value of agentic AI without introducing new risks.
1. The flexibility tension: scalability versus adaptability
Traditional automation succeeds at scale, performing predefined tasks faster and at lower cost. Agentic AI, on the other hand, sits between tools and human workers, deriving much of its value from adaptability – how it responds to changing conditions and learns over time.
Leaders face a trade-off, the report says. Constraining agent systems too tightly can limit their effectiveness, but granting too much freedom can introduce unpredictability. Organizations best positioned to benefit from agentic AI are those that design processes for both scale and learning, treating adaptability as a strategic capability, not a by-product.
2. The investment tension: experience versus opportunity
Agentic AI also requires organizations to rethink how they evaluate costs, time and return on investment, the report’s authors write. Unlike traditional tools that depreciate predictably or workers whose value steadily increases with experience, agent systems simultaneously depreciate due to model drift and appreciate through continuous learning and emerging capabilities.
This creates new investment tensions around when and how to invest in rapidly evolving technology. Conventional financial models struggle to capture these dynamics, often leading organizations to undervalue long-term compound returns. As a result, the researchers found, companies that rely on traditional investment frameworks risk underinvesting in learning and adaptation, while those that adopt hybrid investment models and diversified AI portfolios are better positioned to capture sustainable value.
3. The control tension: supervision versus autonomy
Because agentic AI systems can act independently while behaving unpredictably, managing them is complex. Excessive supervision can negate the benefits of autonomy, while insufficient oversight can expose organizations to operational, compliance and reputational risks, according to the report.
Agentic AI should be managed more like a human colleague than a traditional tool, the researchers write. This requires governance models that define when systems can act autonomously and when human intervention is required. Rather than relying on static controls, leaders must develop dynamic, risk-based monitoring mechanisms that adjust based on context, performance and learning.
4. The scope tension: modernization versus reengineering
Finally, organizations must decide whether to integrate agentic AI into existing workflows to achieve quick, incremental wins or to completely reinvent those workflows. Many early deployments layer agentic capabilities on top of existing processes because doing so costs less time and money, and delivers results faster, than replacing existing systems outright.
The biggest gains, however, come when leaders rethink work from first principles, redesigning processes around hybrid teams of humans and AI agents. The report notes that while this approach requires greater initial investment and longer lead times, it can unlock new operating models and sources of competitive advantage. Leaders must weigh these benefits against the risk that rapid advances in AI technology will outpace long-term redesign efforts.
The importance of leadership
Overall, enthusiasm for agentic AI is ahead of organizational readiness, according to the report. Many companies deploy these systems without fully addressing the strategic tensions they introduce, particularly around governance, talent and accountability.
To succeed, organizations must understand that agentic AI is not just a technology upgrade: it represents a management inflection point. Leaders who treat it as such by investing not only in systems but also in structures, skills and strategy will be better positioned to navigate the emerging agentic enterprise.
Read the report: “The Emerging Agentic Enterprise”
This article is based on the 2025 Artificial Intelligence and Business Strategy report from MIT Sloan Management Review and Boston Consulting Group.
The authors of the report are Sam Ransbotham, David Kiron, Shervin Khodabandeh, Sesh Iyer, and Amartya Das. Sam Ransbotham is a professor of analytics at Boston College and guest editor for MIT Sloan Management Review’s Big Ideas initiative on artificial intelligence and business strategy. David Kiron is the editorial director of research at MIT SMR and program manager for its Big Ideas research initiatives. Shervin Khodabandeh is a managing director and senior partner at BCG, coleader of its North America AI business, and a leader of BCG X. Sesh Iyer is a managing director and senior partner at BCG and North America chair of BCG X, where he helps clients drive large-scale AI transformations. Amartya Das is a director at BCG and currently an ambassador at the BCG Henderson Institute, where he leads research on the impact of technology and AI on society.
