How I think about enterprise AI
It's not just about the agents.
It's about everything around them.
Building an AI agent takes a weekend. Deploying it across an enterprise takes the right strategy. This is the lens I bring to any AI initiative — the full picture many companies miss.
Where most companies actually are
There's a maturity curve from chatbots to agents — and most organizations are earlier on it than they think. Each stage requires different infrastructure, governance, and organizational readiness. I've seen this firsthand across enterprise rollouts.
Chatbots
Passive Q&A. User asks, bot answers from a script or knowledge base. Limited context, no real decision-making.
Most companies are here
Automated Workflows
AI handles defined tasks end-to-end — document processing, routing, extraction. Rigid but useful. Breaks when the process changes.
Growing adoption
Agent-Powered Operations
Semi-autonomous agents that reason, access company data, take actions, use tools, and coordinate with other agents. Requires mature infrastructure.
Where the value is
The real stack
AI isn't a layer you bolt on. It's a full stack — and every layer has to work for the whole thing to deliver. Most "AI strategies" only address two or three of these. I think about all of them.
Governance & Security
Access controls, compliance, audit trails, responsible AI policies
People & Adoption
Change management, training, trust-building, role evolution
Workflows & Processes
Process mapping, automation design, cross-functional coordination
Models & Agents
Agent design & build, orchestration, guardrails, evaluation
Data & Integration
Data quality, system connectors, API strategy, breaking down silos
Infrastructure
Cloud, compute, networking, identity, foundational platform
Every layer depends on the ones below it. Skip one and the whole thing is fragile.
Data silos & context walls
The friction in enterprise AI isn't in the intelligence of the models — it's at the boundaries between your systems and people. Martech doesn't talk to ERP. Support tickets live in one tool, product feedback in another. Knowledge is trapped in spreadsheets, email threads, and shared drives.
Agents need grounding in real company data to be most effective. If your data is fragmented, your AI will be too — giving partial answers, missing context, or hallucinating because it can't see the full picture. I've navigated this across every enterprise platform I've built.
What agents need access to
Docs
CRM
ERP
Chat
Calendar
Disconnected systems = disconnected AI
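One way to break down those context walls is a thin, uniform connector layer in front of each system, so an agent asks one question and gets grounded context from everywhere. Here's a minimal sketch; every class and method name (`Connector`, `search`, `gather_context`) is an illustrative assumption, not a real library:

```python
from abc import ABC, abstractmethod

class Connector(ABC):
    """Uniform read access to one enterprise system (hypothetical interface)."""

    @abstractmethod
    def search(self, query: str) -> list[str]:
        """Return text snippets relevant to the query."""

class CRMConnector(Connector):
    def search(self, query: str) -> list[str]:
        # In practice this would call the CRM's API; stubbed for illustration.
        return [f"[CRM] account notes matching '{query}'"]

class DocsConnector(Connector):
    def search(self, query: str) -> list[str]:
        return [f"[Docs] wiki pages matching '{query}'"]

def gather_context(connectors: list[Connector], query: str) -> str:
    """Fan the same question out to every system and merge the results,
    so the agent sees one grounded context instead of one silo."""
    snippets: list[str] = []
    for connector in connectors:
        snippets.extend(connector.search(query))
    return "\n".join(snippets)

print(gather_context([CRMConnector(), DocsConnector()], "Acme renewal"))
```

The point isn't the code; it's the shape. Once every silo speaks the same interface, adding a sixth system is one new connector, not a rebuild of the agent.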
Fragmented processes
Before you automate a workflow, you have to understand it. Many companies have undocumented, inconsistent processes — different teams doing the same thing three different ways. AI can amplify whatever it finds, including the mess. I always start by mapping before building.
The naive approach
"Let's build an agent for our onboarding process." But which version? Sales does it one way, CS does it another, and EMEA has their own spreadsheet.
The real approach
Map the process first. Identify variations, exceptions, and handoffs. Standardize where possible. Then design the agent around how work actually gets done.
The people problem
The people problem sinks more AI rollouts than bad technology does. People are afraid: of being replaced, of looking incompetent, of trusting a system they don't understand. And they're not wrong to be cautious. I've led teams through post-acquisition integrations and org transformations; the human side of change is always the hardest part.
Fear of replacement
"If this agent can do my job, what happens to me?" Address it directly and honestly: reframe roles around higher-value work rather than elimination, and only make that promise when it's true.
Lack of trust
"How do I know this is right?" Start with human-in-the-loop. Let people verify, correct, and build confidence before you remove guardrails.
Change resistance
"We've always done it this way." Change management isn't a phase — it's ongoing. Training, champions, quick wins, and visible leadership support.
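The human-in-the-loop approach above can be made concrete: the agent proposes, a person approves, and guardrails lift only after trust is earned. A minimal sketch, with all names (`ApprovalGate`, `review`, `auto_approve_after`) as illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class ApprovalGate:
    """Hypothetical gate: humans review agent proposals until a streak of
    consecutive approvals earns the agent autonomy."""
    auto_approve_after: int = 50  # approvals required before autonomy
    approved_count: int = 0       # current streak of human approvals

    def review(self, proposal: str, human_approves: bool) -> bool:
        """Return True if the proposed action may proceed."""
        if self.approved_count >= self.auto_approve_after:
            return True  # guardrail lifted after sustained accuracy
        if human_approves:
            self.approved_count += 1
            return True
        self.approved_count = 0  # a rejection resets earned trust
        return False

gate = ApprovalGate(auto_approve_after=3)
print(gate.review("send renewal reminder email", human_approves=True))  # True
```

The design choice worth noting: a single rejection resets the streak. That's deliberately conservative, because trust is rebuilt faster than it is recovered after a bad autonomous action.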
Governance, security & compliance
Every enterprise agent that touches company data raises real questions. What data can it access? What actions can it take? Where do the outputs go? These aren't afterthoughts; they're prerequisites. I've built PCI L1-compliant platforms and led enterprise security reviews. This is familiar territory.
Data access boundaries
Define what each agent can see and do. Role-based access isn't optional when AI is acting on behalf of your people.
Audit trails
Every action an agent takes should be logged, traceable, and reviewable. Black-box automation doesn't fly in regulated industries.
Compliance & sovereignty
Where does your data go? Who processes it? Industry regulations and data residency requirements don't disappear because you're using AI.
Centralized management
One dashboard to see all agents, their permissions, usage, and performance. Shadow AI is a governance nightmare waiting to happen.
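Two of these controls, scoped access and audit trails, fit naturally in one enforcement point: check every agent action against its role's permissions, and log the attempt either way. A minimal sketch under assumed names (`PERMISSIONS`, `AUDIT_LOG`, `act`), not any particular platform's API:

```python
import json
import time

# Hypothetical role-based permission map: each agent sees only its scope.
PERMISSIONS: dict[str, set[str]] = {
    "support-agent": {"read:tickets", "write:replies"},
    "finance-agent": {"read:invoices"},
}

# Append-only audit trail; in production this would be durable storage.
AUDIT_LOG: list[str] = []

def act(agent: str, action: str, target: str) -> bool:
    """Allow the action only if the agent's role grants it; log either way."""
    allowed = action in PERMISSIONS.get(agent, set())
    AUDIT_LOG.append(json.dumps({
        "ts": time.time(),
        "agent": agent,
        "action": action,
        "target": target,
        "allowed": allowed,
    }))
    return allowed

print(act("finance-agent", "write:replies", "ticket-42"))  # False: out of scope
```

Logging denials, not just successes, is the part teams skip, and it's exactly what a regulator or security review asks for first.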
Measuring what matters
ROI isn't "we deployed 10 agents." It's whether those agents actually changed outcomes. I set clear metrics from day one and evaluate honestly throughout. It's the same approach I used scaling revenue orgs: if we can't measure it, we can't manage it.
Vanity metrics
- ✕ Number of agents deployed
- ✕ API calls per month
- ✕ "AI-powered" features shipped
- ✕ Executive demo impressions
Real metrics
- ✓ Adoption rate by target users
- ✓ Time saved per workflow
- ✓ Error reduction / quality improvement
- ✓ Revenue or cost impact (actual $)
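The real metrics above are computable from raw usage data; counting deployments never is. A tiny sketch of two of them, with function and parameter names as illustrative assumptions:

```python
def adoption_rate(active_users: int, target_users: int) -> float:
    """Share of the intended audience actually using the agent."""
    return active_users / target_users if target_users else 0.0

def hours_saved(runs: int, manual_minutes: float, agent_minutes: float) -> float:
    """Time saved per workflow, aggregated across all runs."""
    return runs * (manual_minutes - agent_minutes) / 60

print(f"adoption: {adoption_rate(120, 400):.0%}")     # adoption: 30%
print(f"saved: {hours_saved(500, 20, 4):.0f} hours")  # saved: 133 hours
```

Even this toy version forces the right conversation: who counts as a target user, and what was the honest manual baseline before the agent existed?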
This is how I think.
Systems thinking applied to AI — the same way I've always approached building organizations. Data, processes, people, governance, architecture. Not just the tech.