Most mid-market companies approach AI like they're planning a kitchen renovation — they draw up a 12-month plan, hire a contractor, and wait. Enterprise companies can absorb that pace. Mid-market companies can't. By the time the plan is finished, the landscape has shifted, the budget has been reprogrammed, and the leadership sponsor has moved on.

The companies I've watched move fastest on AI share one trait: they think in 90-day cycles, not annual programs. Not because 90 days is some magic number, but because 90 days is long enough to produce real evidence and short enough to course-correct before a mistake becomes a strategy.

This is the framework I use when I sit down with a mid-market leadership team that has decided to move on AI and wants a concrete starting point. It won't cover everything. But it will get you from "we should be doing more with AI" to a running pilot with documented learnings — before your next board meeting.

"The companies that move fastest on AI don't have better plans. They have shorter feedback loops."

One prerequisite: before using this framework, run the AI Readiness Assessment. If you score below 5, the 90 days should start with closing those gaps — not with the roadmap below. A roadmap built on misaligned leadership or dirty data doesn't become a plan. It becomes an expensive learning experience.

Month 1: Audit & Align (Weeks 1–4)

Month 1 has one job: identify where AI can actually move the needle for your business, and get leadership aligned before any money moves. It sounds obvious. It is almost never done well.

Most companies skip the audit and go straight to vendor conversations. Six months later, they have a vendor relationship and no clarity on what they were trying to accomplish. The audit is what prevents that.

The goal is a ranked shortlist of AI opportunities tied to specific business outcomes — and executive alignment on which one to pursue first. Not a list of ideas. A prioritized decision.

1. Map your current workflows for AI surface area
Walk each major business process end to end. Where are humans doing work that is repetitive, data-driven, and low-stakes? Where are decisions made on incomplete information that AI could improve? Where does speed matter more than perfection? You're looking for surface area — places where AI has something to grip.
2. Score opportunities against three filters
For each opportunity, ask: Does it connect to a measurable business outcome in the next 90 days? Do we have reasonably clean data to support it? Can we build a feedback loop to know if it's working? Opportunities that pass all three filters go on the shortlist. Those that fail one or more get deprioritized — not killed, but not first.
3. Run an executive alignment session
Present the shortlist to the leadership team. The goal is not consensus — it's clarity. Which opportunity does leadership agree has the highest impact? Who owns it? What does success look like in 90 days, specifically? Get these answers on paper before Month 2 starts. Undocumented alignment is not alignment.
4. Assess data readiness for your #1 priority
Before you commit to building anything, validate that the data you need exists, is accessible, and is reasonably accurate. This is not a data engineering project — it's a 3–5 day audit of what you have. If the data isn't there, the pilot scope has to change. Better to know now than in week 6.
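To keep the audit as small as it should be, here is a minimal sketch in Python with pandas, assuming the pilot's source data can be exported to a single CSV. The file name, column names, and thresholds are placeholders for your own, not part of the framework.

```python
# Minimal data-readiness spot check for the #1 priority.
# Assumes the source data can be exported to a CSV; the column names and
# thresholds below are hypothetical placeholders.
import pandas as pd

REQUIRED = ["record_id", "created_at", "outcome"]  # replace with your own fields
MAX_MISSING_RATE = 0.10     # tolerate up to 10% missing values per field
MAX_STALENESS_DAYS = 90     # newest record should be no older than this

df = pd.read_csv("pilot_source_export.csv")

# 1. Does the data exist, and how much of it is there?
print(f"rows: {len(df)}")

# 2. Are the fields the pilot needs actually present?
missing_cols = [c for c in REQUIRED if c not in df.columns]
print(f"missing columns: {missing_cols or 'none'}")

# 3. Is it reasonably complete? (missing-value rate per required field)
for col in (c for c in REQUIRED if c in df.columns):
    rate = df[col].isna().mean()
    print(f"{col}: {rate:.1%} missing [{'OK' if rate <= MAX_MISSING_RATE else 'INVESTIGATE'}]")

# 4. Is it fresh enough to matter?
if "created_at" in df.columns:
    newest = pd.to_datetime(df["created_at"], errors="coerce").max()
    if pd.notna(newest):
        age = (pd.Timestamp.now() - newest).days
        print(f"newest record: {age} days old [{'OK' if age <= MAX_STALENESS_DAYS else 'INVESTIGATE'}]")
```

If the missing-column list isn't empty or a required field fails the completeness check, that finding is exactly what should change the pilot scope before Month 2 starts.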
Month 1 Deliverable

A one-page AI initiative brief: the specific business problem, the measurable success metric, the data available, the executive owner, and the 60-day pilot scope. If you can't write this in one page, you haven't aligned yet.

Month 2: Pilot & Learn (Weeks 5–8)

Month 2 is where most companies want to start — and why most companies fail. Without Month 1, you run a pilot against a hypothesis no one fully agreed on, using data you didn't validate, with success criteria that shift mid-flight.

With Month 1 done, Month 2 is straightforward: build the smallest thing that can produce real evidence. Not a proof-of-concept. Not a demo. A pilot that runs on production data, touches real workflows, and produces measurable output.

The goal is a running AI pilot that produces real output against real data — with a documented baseline so you know whether it's working. Scope ruthlessly. The pilot should solve one problem in one workflow, not three problems across two teams.

1. Define the baseline before touching anything
Document the current state of the workflow the AI will affect. How long does it take? What's the error rate? What's the cost? What's the output quality? You cannot measure improvement without a baseline. This takes 2–3 days and saves you from a very uncomfortable conversation in Month 3.
2. Build the minimum viable pilot
Scope to the smallest intervention that can produce evidence. If you're piloting AI for contract review, don't build the full review system — build the piece that flags the three highest-risk clause types. Run it on 30 days of historical contracts first, then on live contracts with human sign-off. Every week of scope you cut in week 5 saves two weeks of debugging in week 7.
3. Run the pilot with a real user group
Pilots that run in isolation prove nothing. Identify 3–5 people who actually do the workflow the AI is supposed to improve. Run the pilot with them on their real work. Collect structured feedback weekly: what's working, what's wrong, what would make them use this daily. User feedback in this phase is more valuable than any metric.
4. Measure against baseline weekly, not monthly
Compare pilot output to baseline every week. You're looking for signal — is performance moving in the right direction? You don't need statistical significance at this stage, but you do need a direction. If the pilot is producing worse outcomes than the baseline after two weeks, that's a signal to investigate, not to wait out.
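The weekly comparison itself needs no special tooling. Below is a minimal sketch, assuming the baseline from step 1 was captured as a handful of numbers and one row of pilot metrics gets logged per week; the metric names and figures are hypothetical.

```python
# Weekly baseline-vs-pilot comparison. All numbers below are hypothetical
# placeholders; use the metrics from your own baseline document, measured
# the same way for the baseline and for each pilot week.
baseline = {"hours_per_item": 3.5, "error_rate": 0.12, "cost_per_item": 42.0}

pilot_weeks = [
    {"week": 5, "hours_per_item": 3.1, "error_rate": 0.14, "cost_per_item": 40.0},
    {"week": 6, "hours_per_item": 2.6, "error_rate": 0.11, "cost_per_item": 35.0},
]

for week in pilot_weeks:
    print(f"Week {week['week']}:")
    for metric, base_value in baseline.items():
        delta = (week[metric] - base_value) / base_value
        direction = "better" if week[metric] < base_value else "worse"  # lower is better for all three
        print(f"  {metric}: {week[metric]} vs baseline {base_value} ({delta:+.0%}, {direction})")
```

Read the deltas for direction, not significance: two consecutive weeks worse than baseline is the investigate signal described above.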
Month 2 Deliverable

A pilot results document: baseline vs. pilot metrics, user feedback summary, what worked and what didn't, and a recommended next step. This is the input to Month 3's scale decision — not a slide deck, not a success story. An honest assessment.

Month 3: Scale & Systematize (Weeks 9–12)

Month 3 is where the decision gets made: does this scale, and how? Most companies treat this as the "now we do more of it" phase. It's actually the "now we decide if we should" phase.

Scaling a pilot that doesn't work is expensive. Scaling a pilot that works without a governance structure is also expensive — just slower. Month 3 does both: it makes the scale decision with evidence, and it builds the minimum governance structure to sustain whatever gets scaled.

The goal is a production deployment with documented operating procedures and a 90-day review cadence — or a documented decision to pivot scope based on pilot learnings. Either outcome is valid. The only invalid outcome is inaction.

1. Make the scale decision with leadership
Present the Month 2 results to the executive team. Three questions need answers: Did the pilot produce evidence that this approach works? Is the team willing to expand it? What changes before we scale? This is a decision meeting, not a status update. Come with a recommendation, not just data.
2. Define the operating model for production
Who owns this AI system when it's in production? Who monitors it? Who gets called when it produces bad output? What's the escalation path? Document this before scale, not after. The moment AI is in production, you need an owner — not a committee, a person with a name and a phone number.
3. Build minimum viable AI governance
You don't need an AI ethics committee. You need three things: a list of what AI is and isn't allowed to do in this system, a mechanism to detect when output quality degrades, and a process for updating the system when the underlying workflow changes. Write these down. One page is enough. (A minimal sketch of the degradation check follows this list.)
4. Set the next 90-day review cadence
Before Month 3 ends, schedule the Month 6 review. What will you measure? Who will present? What would trigger an earlier review? Building the review cadence into the calendar before scaling is what separates AI that gets maintained from AI that gets abandoned after the initial novelty wears off.
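For the degradation mechanism in step 3, the check can be as small as a script someone runs on a schedule: compare a weekly acceptance rate against the pilot baseline and alert the named owner when it slips. A minimal sketch follows; the review-sample approach, metric, and thresholds are illustrative assumptions, not a prescription.

```python
# Minimal quality-degradation check for a production AI system.
# Assumes a small sample of outputs is reviewed each week and each one is
# recorded as accepted (1) or rejected (0); names and thresholds are illustrative.
BASELINE_ACCEPT_RATE = 0.92   # acceptance rate measured during the pilot
ALERT_THRESHOLD = 0.05        # alert if acceptance slips more than 5 points below baseline

def check_quality(weekly_reviews: list[int]) -> None:
    """weekly_reviews: 1 = reviewer accepted the output, 0 = rejected."""
    if not weekly_reviews:
        print("ALERT: no outputs reviewed this week; the feedback loop itself has broken")
        return
    accept_rate = sum(weekly_reviews) / len(weekly_reviews)
    if BASELINE_ACCEPT_RATE - accept_rate > ALERT_THRESHOLD:
        print(f"ALERT: acceptance {accept_rate:.0%} vs baseline {BASELINE_ACCEPT_RATE:.0%};"
              f" notify the named owner and pull the review forward")
    else:
        print(f"OK: acceptance {accept_rate:.0%} (baseline {BASELINE_ACCEPT_RATE:.0%})")

# Example: this week's sample of 20 reviewed outputs, 16 accepted
check_quality([1] * 16 + [0] * 4)   # 80% acceptance -> triggers an alert
```

The empty-sample branch matters as much as the threshold: if nothing was reviewed this week, the monitoring itself has stopped, and that belongs to the named owner to fix.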
Month 3 Deliverable

A production AI system with a named owner, documented operating model, and a scheduled 90-day review. Or: a documented pivot decision with a revised Month 1–3 plan for the next cycle. Either is a win. Stalled pilots are not.

The Five Pitfalls That Kill Mid-Market AI Initiatives

These aren't hypothetical. I've watched each one derail what looked like a promising start. They connect directly to the failure patterns I described in Why AI Adoption Fails — and they map to the readiness gaps the assessment framework is designed to catch before you start.

Pitfall 01: Starting with technology instead of the problem

The vendor demo is compelling. The use cases look close enough to your situation. You buy, then figure out what you're solving. Six months later, you're not using 80% of what you paid for. The 90-day framework starts with the problem because every technology decision has to follow from it — not precede it.

Pitfall 02: Piloting without a baseline

You can't know if the AI worked if you don't know what "working" means relative to what you had before. Teams that skip the baseline spend Month 3 arguing about whether Month 2 was a success. The argument never gets resolved. The initiative stalls in ambiguity.

Pitfall 03: Scaling before the pilot has a verdict

Leadership gets excited about the pilot demo and approves expansion before the data is in. Now you're scaling a hypothesis instead of a result. The problems that would have been contained in the pilot are now in production, touching more workflows, with more people depending on outputs you haven't validated.

Pitfall 04: No owner after launch

The team that built the pilot moves to the next project. The AI system runs. Quality drifts — slowly enough that no single bad output triggers a review, fast enough that six months later the team has quietly stopped trusting the output. AI in production needs an owner the way any production system does. Define this before launch, not after the first complaint.

Pitfall 05: Treating the 90-day cycle as a one-time project

The framework works because it's a cycle, not a project. Your second 90 days should learn from your first. Your third should learn from your second. Companies that treat their initial AI roadmap as a one-time deliverable rather than an ongoing operating rhythm hit a wall the moment the initial momentum runs out. Build the review cadence into the calendar before you start.

"The 90-day cycle is not about moving fast. It's about moving with enough evidence to make the next decision well."

What This Looks Like in Practice

At Blanket Homes, the 90-day structure surfaced a constraint in Month 1 that would have cost months if discovered in Month 2: the property data they planned to use for AI-driven recommendations was inconsistently tagged across their CRM. Rather than building the pilot on unreliable data, we spent the first three weeks improving data quality on the subset of records the pilot would touch. The pilot was delayed by two weeks. The learnings were clean. The scale decision in Month 3 was straightforward.

At UniperCare, Month 1 produced a prioritized shortlist of five AI candidates. The team wanted to start with three simultaneously. The alignment session forced a choice: one. The one that made the cut was not the most technically interesting — it was the one the clinical staff had explicitly asked for. That user pull made Month 2 adoption significantly easier than any of the other options would have been.

In both cases, the 90-day structure didn't constrain what was possible. It prevented the failure modes that kill AI initiatives before they prove anything.

Start here: The AI Readiness Assessment — 7 Questions to Answer Before You Build a Roadmap →

Further reading: Why AI Adoption Fails Without Executive Alignment →

Further reading: What I Learned Advising 5 Companies on AI Adoption →