Scaling Agentic AI: 5 Steps To Bridge The Execution Gap

In this Forbes Technology Council piece, Darrow's VP of Product & Tech Gaby Prechner tackles one of the defining challenges of the current AI moment: why so few companies are actually scaling agentic AI despite near-universal interest in it — and what leaders need to do differently.

Prechner opens with a striking gap. AI agents are an $8.5 billion market in 2026, with Deloitte projecting 5x growth by 2030. IBM predicts an 8x increase in AI-enabled workflows this year alone, and 7 in 10 executives believe agents will reinvent how work gets done. And yet, while nearly two-thirds of companies are exploring AI agents, no more than 10% are actually scaling them. Fewer than half of those adopting agents are fundamentally rethinking how work gets done or redesigning their processes to incorporate them. Meanwhile, poor-quality AI work is costing companies $9 million per 10,000 employees annually. The problem, Prechner argues, is not the technology; it is how organizations are approaching it.

The core distinction the piece draws is between linear improvement and transformative value. Most organizations bolt AI tools onto existing processes and measure success in time saved per task. That is linear improvement. Transformative value is something different: it changes both the unit of value an organization produces and how work is organized. Using legal research as an example, Prechner illustrates what this looks like in practice. In a linear model, a researcher poses a question, the AI answers it faster, and the organization measures the time saved. In a truly agentic model, research is no longer conducted on a one-off basis — AI agents continuously explore hypotheses, human researchers orchestrate and validate their outputs, and leaders make strategic decisions based on a steady stream of insights. Headcount no longer limits scalability, handoffs shrink, new roles emerge, and the field of possibility expands.

To help leaders close this gap deliberately rather than accidentally, Prechner draws on two innovation frameworks. The first is Systematic Inventive Thinking, specifically its Subtraction pattern, which asks teams to remove core components of a system and design as if those components cannot exist. The goal is to expose which parts of a process create genuine value and which merely compensate for broken parts elsewhere — the latter being the areas most ripe for agentic redesign. The second framework is Zero-Based Thinking, which asks leaders to design a function from scratch as if they were building it today with agentic AI available, independent of legacy systems and inherited assumptions.

Prechner then synthesizes both frameworks into a practical five-step process:

1. Define the unit of value you want the system to produce.
2. Identify your true constraints, including regulation, risk tolerance, and quality thresholds.
3. Apply zero-based thinking to design an ideal agent-native system.
4. Gut-check that design using subtraction to test what is achievable today.
5. Iterate forward until the real system resembles the ideal as closely as possible.

The piece closes with a clear call to action: transformation needs to be structured and repeatable, not accidental. Organizations that apply this discipline to agentic AI adoption will shift from incremental efficiency gains to scalable, compounding growth.