For two decades, Danilo McGarry has been the operator behind some of the largest AI programs in regulated industries. He led AI at Citigroup and UnitedHealth Group (America’s 5th largest company), then helped drive Alter Domus (a business services company) from a 900 million euro valuation to 4.9 billion euros in four years by executing a workflow-first, federated operating model tied to hard KPIs and governance with real authority. His methods have delivered over USD 2 billion in measurable value and included running an automation estate of 3,500 digital workers, regarded to this day as the largest digital worker estate in the world.
His approach is now taught in over 100 universities worldwide, studied by the Big Four, and used to train senior partners who want to scale AI beyond pilots. The United Nations has invited McGarry to share these frameworks so that others in the private and public sectors can learn to drive more tangible benefits from AI and digital transformation programs. In a market crowded with promises, his approach to scaling AI for tangible, auditable results is one of the few that consistently works today. Danilo is currently a senior advisor to several companies, including Kaplan, Coca-Cola, Quadient, CIPD, Standard Chartered, Portage Point Partners, and many more. He is on a mission to show the world that when AI is done right, it can truly transform a business.
Q: What is your formula to scale AI beyond pilots?
A: I start with process, not models. We remove waste, standardize inputs, and make decision points explicit. Then we run everything through a workflow engine. That is where humans, systems, and agents are orchestrated with audit and rollback. I use a federated operating model. Business units build where the pain is. A central enablement core sets platforms, security, patterns, and spend guardrails. Finally, governance with teeth. Stage gates, kill-switch authority, hurdle rates, and portfolio KPIs. Do it in that order and pilots grow into compounding value.
Q: How do you choose where to start?
A: I score work on four things. Volume, variance, verifiability, and value. I want high throughput, low avoidable variance, a clear source of truth, and a direct line to a board-level lever like margin or retention. I pick one or two anchors with named owners and clean data access. I also set stop criteria on day one. If the evidence is not there by a checkpoint, we pivot or end it. That protects capital and keeps trust high.
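To make the four-V rubric concrete, here is a minimal sketch of how such scoring could be expressed in code. The candidate workflows, weights, and cut-off below are hypothetical illustrations, not McGarry's actual rubric.

```python
# Illustrative four-V scoring rubric: volume, variance, verifiability, value.
# Weights, candidates, and the anchor cut-off are hypothetical.

CANDIDATES = {
    # name: (volume, variance, verifiability, value), each scored 1-5.
    # Variance is scored so HIGHER means LESS avoidable variance.
    "invoice_reconciliation": (5, 4, 5, 4),  # high throughput, clear source of truth
    "contract_drafting":      (2, 2, 3, 5),  # valuable, but low volume and fuzzy ground truth
    "kyc_document_checks":    (4, 4, 4, 3),
}

WEIGHTS = (0.3, 0.2, 0.25, 0.25)  # must sum to 1.0

def score(metrics: tuple[int, int, int, int]) -> float:
    """Weighted average of the four V's on a 1-5 scale."""
    return sum(w * m for w, m in zip(WEIGHTS, metrics))

# Rank candidates and pick at most two anchors above a cut-off.
ranked = sorted(CANDIDATES.items(), key=lambda kv: score(kv[1]), reverse=True)
for name, metrics in ranked:
    print(f"{name}: {score(metrics):.2f}")

anchors = [name for name, m in ranked if score(m) >= 4.0][:2]
print("Anchor candidates:", anchors)
```

Under these assumed weights, only the high-volume reconciliation workflow clears the bar, which mirrors the point of the rubric: one or two anchors, chosen on evidence rather than enthusiasm.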
Q: What does process first look like in practice?
A: We map target state, not today’s pain. We remove rework and handoffs. We define control points where people must confirm, override, or escalate. We turn those into rules the workflow will enforce later. Only after that do we pick models and tools. Technology should amplify a good design. It should not harden a bad one.
Q: How do you redesign human plus AI roles?
A: I split by strengths. AI does retrieval, drafting, reconciliation, and monitoring. Humans own judgment, edge cases, customer moments, and accountability. We update job descriptions, spans of control, and KPIs. Adoption becomes part of performance, not a suggestion. We also make escalation paths clear. That stops shadow processes and builds confidence.
Q: Why is a workflow engine non-negotiable?
A: Because scale needs orchestration. The engine assigns tasks, enforces SLAs, and logs decisions. It generates the training data that improves models safely. It wires in thresholds, alerts, and rollback. It gives leaders a live view of throughput and value. Agents run inside that structure, not free-range in chat.
Q: Centralized versus federated operating model?
A: Federated delivery with a strong core wins. Functions ship faster because they know the work. The core team provides identity, data governance, platforms, pattern libraries, red teaming, and spend guardrails. That balance stops endless POCs and turf wars. It also turns reuse into a habit. People can find approved patterns that already work.
Q: What does effective governance look like?
A: Small and empowered. Chaired from the business. One-page stage gates. Pattern catalogs. Risk checklists. Red teaming before exposure to customers. Post-deploy benefit tracking. The council owns the kill switch. It publishes decisions and rationales. That transparency lowers risk and speeds approvals over time.
Q: What KPIs and evidence should leadership demand?
A: Quality, cost, flow, and business outcome. First-time-right and error escape for quality. Unit economics per transaction after redesign for cost. Throughput and cycle time versus baseline for flow. A board KPI like revenue, retention, or margin for outcome. Put adoption metrics in executive objectives and bonuses. Report at portfolio level with hurdle rates and stop criteria. That is how capital flows to what works.
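As a rough illustration of those four lenses, the sketch below rolls up quality, cost, and flow metrics from a handful of transaction records. The field names, figures, and baselines are all hypothetical.

```python
# Illustrative portfolio KPI roll-up: quality, cost, and flow versus baseline.
# All figures and field names are assumed for demonstration only.

transactions = [
    # (handled_right_first_time, fully_loaded_cost_eur, cycle_time_hours)
    (True, 3.20, 4.0),
    (True, 2.90, 3.5),
    (False, 7.50, 9.0),  # exception: rework pushed up cost and cycle time
    (True, 3.10, 4.2),
]

BASELINE_UNIT_COST = 6.00    # pre-redesign cost per transaction (EUR)
BASELINE_CYCLE_HOURS = 8.0   # pre-redesign cycle time

first_time_right = sum(ok for ok, _, _ in transactions) / len(transactions)
unit_cost = sum(cost for _, cost, _ in transactions) / len(transactions)
cycle_time = sum(hours for _, _, hours in transactions) / len(transactions)

print(f"First-time-right: {first_time_right:.0%}")
print(f"Unit cost: {unit_cost:.2f} EUR (baseline {BASELINE_UNIT_COST:.2f})")
print(f"Cycle time: {cycle_time:.1f} h (baseline {BASELINE_CYCLE_HOURS:.1f})")
```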
Q: Build versus buy, without hype?
A: Build the parts that create memory and control. Governance, data engineering, and product ownership. Buy commodity models, utilities, and accelerators unless they are your moat. Use open standards so switching stays possible. Outsource peak effort, never decisions or risk.
Q: What is the right data strategy?
A: Retrieval over memorization as the default. Curate trusted sources. Maintain lineage and permissions. Use event flows and APIs so data is fresh and consistent. Only train bespoke models when it moves a core value lever and you can fund the lifecycle. Treat metadata as a first-class asset. Audits become fast and trust goes up.
Q: Why treat monitoring as a product?
A: Because risk and value move. Monitoring covers bias, drift, safety triggers, operational health, and business KPIs in one view. It has a named owner with authority to act. It includes thresholds, alerts, runbooks, and rollback. It gets maintained like any other product. If no one owns the monitor, no one owns the risk.
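For flavor, here is a minimal sketch of what such a monitor might look like in code, assuming hypothetical metric names, thresholds, and a rollback hook; none of this is taken from a specific product.

```python
# Illustrative monitor loop: thresholds, alerts, and a rollback trigger.
# Metric names, limits, and the runbook hook are assumptions.

THRESHOLDS = {
    "drift_score": 0.15,         # distribution shift vs. reference data
    "error_escape_rate": 0.02,   # defects that reached a customer
    "p95_latency_seconds": 5.0,
}

def evaluate(snapshot: dict[str, float]) -> list[str]:
    """Return the list of breached metrics for one monitoring snapshot."""
    return [name for name, limit in THRESHOLDS.items()
            if snapshot.get(name, 0.0) > limit]

def on_breach(breached: list[str]) -> None:
    # The named owner gets the alert; a severe breach triggers rollback.
    print(f"ALERT -> monitoring owner: breached {breached}")
    if "error_escape_rate" in breached:
        print("Severe breach: invoking tested rollback runbook")

snapshot = {"drift_score": 0.21, "error_escape_rate": 0.01,
            "p95_latency_seconds": 3.2}
breached = evaluate(snapshot)
if breached:
    on_breach(breached)
```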
Q: How should CFOs fund AI properly?
A: Agree what can be capitalized for platforms and core data. Expense modernization cleanly. Use a snowball model where realized savings and value gains fund the next phase. Track benefits at portfolio level and reinvest against hurdle rates. Scale becomes a financial mechanism, not a hope.
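To show the arithmetic of a snowball model, here is a small worked sketch; the seed budget, hurdle rate, reinvestment share, and return multiple are all assumed numbers for illustration, not figures from any real program.

```python
# Illustrative "snowball" funding model: realized value from each phase is
# reinvested in the next, subject to a hurdle rate. All numbers are hypothetical.

HURDLE_RATE = 0.20      # minimum ROI required to keep reinvesting
REINVEST_SHARE = 0.80   # share of realized value ploughed back in

budget = 500_000.0      # seed funding for phase 1 (EUR)
for phase in range(1, 5):
    realized_value = budget * 1.5  # assume each phase returns 1.5x spend
    roi = realized_value / budget - 1.0
    print(f"Phase {phase}: spend {budget:,.0f}, value {realized_value:,.0f}, ROI {roi:.0%}")
    if roi < HURDLE_RATE:
        print("Below hurdle rate: stop and re-plan instead of reinvesting")
        break
    budget = realized_value * REINVEST_SHARE  # realized value funds the next phase
```

The point of the mechanism is visible in the output: each phase's budget grows out of the last phase's realized value, and funding stops automatically when returns fall below the hurdle.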
Q: How do you align culture and incentives?
A: Put three AI KPIs into every executive and business unit leader’s objectives. Reward measurable adoption, uplift, and risk controls. Remove blockers in the open. Celebrate shipped outcomes. Make change visible in daily work so people feel the friction drop. That is what sustains momentum.
Q: What does real reskilling look like beyond copilot licenses?
A: Role-based and tied to target workflows. Operators learn prompt hygiene, data awareness, and exception handling. Managers learn product ownership, backlog design, and KPI instrumentation. Leaders learn governance, portfolio thinking, and scenario drills. We measure skill uptake through production metrics, not classroom hours.
Q: When do you say no to a deployment?
A: When there is no measurable problem statement and no stop criteria. When there is no named owner for monitoring. When data risk or lineage is unresolved. When rollback is missing or untested. Saying no early saves credibility. It protects the budget for the work that can scale.
Q: Can you share a representative outcome without short-term promises?
A: A professional services firm is a good example. We re-engineered reconciliation and validation. We embedded AI inside a workflow engine. We consolidated monitoring. Exceptions became auditable events. Capacity moved to higher margin work. Compliance exposure dropped. The gains compounded across phases because process discipline, governance, and reuse were in place.
Q: How do mid-market companies apply this without big teams?
A: Start with one high volume workflow you control end to end. Stand up a lightweight council with real authority and a public checklist. Pick one platform stack. Use a small pattern library for prompts, flows, and tests. Instrument monitoring from day one so each phase teaches you what to improve next. Fund it with a snowball so momentum is built into the economics.
Q: What makes your approach different?
A: I am operator first. Process before models so you do not automate waste. Workflow before agents so scale is safe. Federated delivery under one orchestrator so local speed meets enterprise control. Governance with stage gates and a kill switch so value and reputation are protected. Portfolio KPIs tied to compensation so adoption is not optional. That formula is one of the key ways we lifted Alter Domus from 900 million to 4.9 billion euros in four years, and it is why universities and consulting partners are so keen to apply it to how they operate today.
Q: What is your mission going forward?
A: Keep proving that scale is an operating system, not a slide. I’m focused on three things. First, teaching the formula to operators who have to make it work day to day, through hands-on advisory with boards and functional leaders, and through keynotes and executive workshops where we codify process-first, workflow-led, federated delivery with real governance. Second, building capacity at scale by partnering with universities and training programs so the playbooks live beyond me, and continuing to upskill senior consulting partners who want practitioner-level execution, not theory. Third, open evidence: publishing checklists, pattern libraries, and on-record case studies so others can replicate my results.
I’m also using my podcast, It’s All About AI by Danilo McGarry on YouTube, to democratize the know-how. After 13 episodes it has attracted roughly a quarter-million subscribers and several million views, and the format lets me bring in operators from different industries to dissect what actually works. It’s a direct way to give back the lessons from the last 20 years while helping more teams turn AI from pilots into measurable outcomes.
Not bad for a kid who grew up without access to a phone or technology until after the age of 10.
To get in contact with Danilo McGarry, visit his website at www.danilomcgarry.com