I've been advising companies on AI adoption across five very different industries. Sports entertainment. Music AI. Property technology. Healthcare. Media. Different business models, different org structures, different starting points.

The technology questions are rarely the same. The organizational questions are almost always identical.

What follows is a distillation of what I actually saw — the real blockers, the pivots that worked, and what separated the companies that moved quickly from those that spent eighteen months spinning. Companies are described at the category level; specific commercial details are omitted.

The Five Engagements

Case Study 01
Kings League — Sports Entertainment
AI for fan engagement, content, and data intelligence

Kings League is one of the fastest-growing sports properties in the world — built from scratch by Piqué and Ibai, scaling to tens of millions of fans across digital platforms in just two years. When I came in, the challenge wasn't enthusiasm for AI. The challenge was abundance. The team had identified compelling AI opportunities across fan engagement, broadcast, mobile, content creation, and competitive data. All of them were real. None of them had priority.

The advisory work was almost entirely about constraint. We built a forcing function: rank AI investments by two axes — revenue proximity (how directly does this connect to a business outcome within two quarters?) and infrastructure leverage (does this compound, or is it a standalone tool?). The answer cut the AI roadmap from eleven initiatives to three. Those three shipped. The others are still "being evaluated" twelve months later at comparable organizations that didn't make the same cuts.
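The two-axis forcing function can be sketched as a simple scoring exercise. This is an illustrative sketch only — the initiative names and scores below are hypothetical, not the actual Kings League roadmap.

```python
# Sketch of the two-axis forcing function: rank AI initiatives by
# revenue proximity (1-5) plus infrastructure leverage (1-5).
# Initiative names and scores are hypothetical.

def rank_initiatives(initiatives):
    """Sort initiatives by combined score, highest first."""
    return sorted(
        initiatives,
        key=lambda i: i["revenue_proximity"] + i["infra_leverage"],
        reverse=True,
    )

candidates = [
    {"name": "fan chat assistant", "revenue_proximity": 4, "infra_leverage": 2},
    {"name": "content tagging pipeline", "revenue_proximity": 3, "infra_leverage": 5},
    {"name": "standalone stats app", "revenue_proximity": 2, "infra_leverage": 1},
]

# Keep only the top three; everything else waits.
shortlist = rank_initiatives(candidates)[:3]
for item in shortlist:
    print(item["name"])
```

The point of writing it down this crudely is the forcing function itself: every initiative gets a number on each axis, and the cut line is explicit rather than negotiable.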

↗ AI roadmap compressed from 11 initiatives to 3 — all three shipped on schedule
Case Study 02
Cyanite — Music AI
AI-powered music metadata and licensing intelligence

Cyanite is building AI infrastructure for the music industry — intelligent tagging, mood analysis, and search for music libraries at scale. The product thesis is technically compelling and the team is strong. The challenge when I came in: the AI product roadmap and the fundraising narrative were evolving independently. What the team was building, what investors were being told, and what customers were actually paying for had begun to diverge.

The advisory work here was largely about convergence. We spent significant time aligning the internal product thesis — what AI capabilities they were actually building and why — with the investor-facing narrative. The goal wasn't spin. It was finding the version of the story that was both true and compelling. When those two things align before the fundraising process, conversations are faster and terms are better. When they don't, you're negotiating against your own ambiguity.

↗ Aligned product roadmap with investor narrative ahead of fundraising round
Case Study 03
Blanket Homes — Property Technology
AI in property management workflow and tenant experience

Blanket Homes is building AI-powered tools for property management — automating workflows across leasing, maintenance coordination, and tenant communication. The product ambition is high. The constraint is real: property management is a trust business. Landlords and tenants have relationships that extend years. AI that reduces friction for the platform but introduces friction or opacity into those relationships destroys the core product, not just a feature.

The advisory work here was about sequencing. Before any engineering resources were committed, the leadership team needed to answer a specific question: where does AI serve the relationship, and where does it threaten it? We mapped the workflow end-to-end and categorized every AI opportunity by that lens. Automation behind the scenes (maintenance routing, document processing, renewal triggers) went straight to the roadmap. AI in direct tenant communication required a different approach — a more careful, human-in-the-loop architecture. Getting leadership aligned on that line before engineering started saved months of rebuild.
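The segmentation lens reduces to one question per opportunity: does it touch the landlord-tenant relationship directly? A minimal sketch, with hypothetical opportunity names:

```python
# Sketch of the workflow segmentation lens: background automation goes
# straight to the roadmap; tenant-facing AI needs human-in-the-loop
# design. Opportunity names are hypothetical.

BACKGROUND = "background"
TENANT_FACING = "tenant_facing"

def segment(opportunity):
    """Classify an opportunity by whether it contacts tenants directly."""
    return TENANT_FACING if opportunity["tenant_contact"] else BACKGROUND

opportunities = [
    {"name": "maintenance routing", "tenant_contact": False},
    {"name": "renewal triggers", "tenant_contact": False},
    {"name": "tenant chat replies", "tenant_contact": True},
]

roadmap = [o["name"] for o in opportunities if segment(o) == BACKGROUND]
review_queue = [o["name"] for o in opportunities if segment(o) == TENANT_FACING]
```

The value is not in the code but in forcing every proposal through the same binary before engineering begins.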

↗ Workflow segmentation framework adopted — prevented costly architecture rebuild
Case Study 04
UniperCare — Health Technology
AI in patient support and clinical workflow coordination

UniperCare operates in a space where the stakes of AI errors are not measured in conversion rates — they're measured in patient outcomes. The leadership team understood this. What they hadn't resolved was how that understanding should translate into product decisions. There was an implicit tension between the commercial pressure to ship AI-powered features and the clinical and regulatory caution that the product domain requires.

The advisory work was about making that tension explicit and giving it structure. We built a three-tier AI decision framework: AI that operates fully in the background (scheduling, documentation, routing) with no clinical adjacency; AI that supports clinical staff with clear human-in-the-loop requirements; and AI that directly influences patient experience, which required a separate review process and explicit regulatory mapping. Making those tiers visible gave the product team a clear vocabulary for every new AI proposal — and gave leadership confidence that moving fast on the first tier didn't create exposure in the third.
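The three tiers can be encoded as a decision rule answering two yes/no questions per proposal. The tier labels follow the description above; the classification logic and example proposal are my illustrative sketch, not UniperCare's implementation.

```python
# Sketch of the three-tier AI decision framework: two yes/no questions
# map each proposal to a tier. Tier labels follow the framework as
# described; the example proposal is hypothetical.

TIER_LABELS = {
    1: "background (no clinical adjacency)",
    2: "clinical support (human-in-the-loop required)",
    3: "patient-facing (separate review + regulatory mapping)",
}

def assign_tier(touches_patient_experience, clinically_adjacent):
    """Map two yes/no questions about a proposal to a tier number."""
    if touches_patient_experience:
        return 3
    if clinically_adjacent:
        return 2
    return 1

# A documentation-automation proposal: neither patient-facing nor
# clinically adjacent, so it lands in the fast-moving first tier.
print(TIER_LABELS[assign_tier(False, False)])
```

The ordering of the checks matters: patient-facing impact dominates, so a proposal that is both clinically adjacent and patient-facing gets the strictest review.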

↗ Three-tier AI decision framework — accelerated background AI deployment while maintaining clinical safeguards
Case Study 05
Shuffle-X — Media
AI-powered content discovery and audience development

Shuffle-X is building in a media space where AI is simultaneously the competitive opportunity and the identity risk. The question every media organization faces: how much AI is the product before the product stops being yours? For Shuffle-X, this wasn't a philosophical question — it had direct implications for advertiser relationships and audience trust, both of which are the business.

The advisory engagement focused on establishing clear editorial boundaries before the AI roadmap was finalized. We separated AI as infrastructure (recommendation, optimization, workflow) from AI as content (generation, curation presented as editorial). The first category had essentially no internal resistance once the distinction was drawn. The second required explicit commercial and brand guardrails that didn't previously exist. Establishing those guardrails before they were needed — rather than after the first audience backlash — changed the speed and confidence of the product development process.

↗ Editorial AI governance framework — enabled infrastructure AI to ship while protecting brand integrity

Three Patterns Across All Five

Looking across these engagements, three things were consistently true — regardless of industry, company size, or how technically sophisticated the team was.

1. The prioritization problem is universal

Every company I've worked with has too many AI opportunities and not enough organizational bandwidth to pursue them seriously. This is not a failure of imagination — it's actually a sign of a healthy, curious team. But without a forcing function for prioritization, AI investment disperses. You get pilots everywhere and production deployments nowhere.

The fix is usually a simple framework: revenue proximity times organizational readiness. Initiatives with high revenue proximity (a direct connection to a business outcome within two quarters) and high organizational readiness (the team that would use them is already bought in) go first. Everything else waits. It sounds obvious. It almost never happens without an outside forcing function.

2. The investor narrative and the product roadmap must be synchronized

This matters most in companies that are fundraising actively, but it applies broadly to any organization where the "AI story" is told externally — to customers, partners, or boards. When the story and the roadmap diverge, every external conversation creates internal debt. You commit to capabilities that haven't been prioritized. You describe AI sophistication you haven't built yet. The gap compounds.

"When the AI story and the product roadmap diverge, every external conversation creates internal debt."

Getting these aligned — not perfectly, but directionally — before external conversations happen is one of the highest-leverage things advisory can do. It changes the fundraising conversation from negotiating around ambiguity to building on conviction.

3. Workflow AI ships. Standalone AI stalls.

Across all five companies, the AI investments that progressed to production had one thing in common: they lived inside existing workflows. The AI that surfaced maintenance requests in the property management tool. The AI that auto-tagged music metadata in the existing library interface. The AI that routed patient support tickets in the coordination system.

The AI investments that stalled were almost universally standalone — a new app, a new dashboard, a new interface that required behavioral change from users who hadn't asked for it. The technology wasn't the issue. The adoption friction was.

This is a product principle as much as an AI principle: meet people where they already are. AI that requires a behavior change to capture its value will underperform AI that delivers value inside the behavior that already exists.

Three Things You Can Act On This Week

If you're leading AI adoption at your company, take these three actions into your next leadership session:

  1. Force-rank your AI initiatives by revenue proximity and organizational readiness. If you have more than four initiatives ranked "high priority," you don't have a strategy — you have a wishlist. Cut until the top three are ones the whole leadership team can actually resource and own.
  2. Audit the gap between your external AI narrative and your internal roadmap. Ask your CEO what you're building in AI. Ask your CPO. Ask your CTO. If the answers don't match, fix that before the next investor or board conversation — not after.
  3. For every AI proposal, ask: does this live inside an existing workflow? If the answer is no, the adoption plan needs to be three times as detailed as the technical plan. If there is no adoption plan, defer the initiative until there is one.

The companies that are winning in AI aren't the ones with the most advanced models or the largest AI teams. They're the ones that moved with clarity — on what they were building, why, and where it fit in how their organization actually works.

That clarity is not a technology problem. It's a leadership and advisory problem. And it's solvable.

Related reading: Why AI Adoption Fails Without Executive Alignment →

Related reading: The AI Readiness Assessment: 7 Questions Every Executive Should Answer →