Your organization doesn’t have a change problem. It has an experience problem.

Most transformation efforts focus on processes and tools. Culture change actually starts somewhere else entirely.

I was first introduced to the change pyramid in 2013, and it fundamentally shifted how I think about organizational transformation. The idea is deceptively simple: if you want to change culture, you can’t start with culture. You have to start with experiences.

What people experience shapes their beliefs. What they believe drives their behavior. Enough changed behavior, over time, becomes culture. This sequence is non-negotiable — and it’s exactly why so many transformation efforts fail. They start with process change and wonder why the culture doesn’t follow.


I’ve seen this play out in organizations of every size. Leaders invest heavily in new frameworks, tools, and playbooks. They announce the change. They train people. And then they wait for hearts and minds to follow. They rarely do — at least not through that path.

The approach I keep coming back to — across teams, programs, and enterprise-wide efforts — is what I call an Experimentation Framework. The psychological mechanism underneath it is what makes it work:

The experimentation framework

  1. Identify the opportunity. Name the specific friction point or growth area — not a vague goal, a concrete one.
  2. Build an actionable hypothesis. “We believe that if we try X, we will see Y.”
  3. Quantify how you will measure success. Define what “working” looks like before you start — not after.
  4. Set a timebox. A defined end creates safety. People try harder when they know they can revert.
  5. Measure the results. Honestly. Without predetermined conclusions.
  6. Accept, reject, or pivot. This is the most important step most organizations skip entirely.
The Experiment Framework

What makes this different from just “running a pilot” is the setup. When you present someone with a process change, resistance often follows — especially if their existing experiences don’t align with what’s being proposed. But when you frame the same idea as an experiment with a timebox and a real exit ramp, something shifts. The permanence factor disappears. People are willing to try things they’d otherwise push back on when they don’t feel trapped by the outcome.

Experiments also activate a growth mindset in a way mandates never can. When someone is a participant in the problem-solving process — not just a recipient of a change initiative — they become a stakeholder in the outcome. That’s a fundamentally different relationship to have with the work.


How an experience changed my own beliefs

I was leading a transformation effort at a large online retailer. The organization had a proven model they’d used across hundreds of teams — training, role adoption, change management, hands-on coaching — and for most groups, it worked reasonably well. I was tasked with bringing the last holdout group into the fold: the data science organization.

Day one, I walked into a room of PhDs who immediately started asking questions I wasn’t fully prepared for. Do you have empirical data showing why this approach prevents the problems you’re describing? What’s the evidence that this method is better than what we’re doing now? These weren’t hostile questions — they were the natural language of people who live and breathe data. But our standard playbook had no good answers for them.

I went back to my director and said: I don’t think our cookie-cutter approach is going to work here. And that’s when it clicked for me. These were people who ran experiments for a living. The way to work with this group wasn’t to bring them a solution — it was to bring them a framework they already trusted.

Instead of presenting a change plan, I ran workshops to surface the challenges they were actually experiencing. What’s getting in the way of doing your best work? What have you tried? What’s worked and what hasn’t? We used voting exercises to prioritize the themes that resonated most across the department. Then, rather than prescribing solutions, I crowdsourced their participation in designing experiments — framing each one with a clear hypothesis, defined success criteria, an agreed-upon timeline, and an intentional plan to measure results.

The results were mind-blowing. This group went from high resistance and deep skepticism to high adoption and high value — faster than any group I’d worked with using the standard model.

What made it work was that they could feel real problems getting better. Problems from their actual day-to-day work, not abstract transformation objectives handed down from leadership. Getting better meant they could do more of what they loved — the data science work — with more clarity and less friction. As a change leader, I was able to weave the enterprise transformation goals into the bottom-up challenges they had identified themselves. They were on board because they owned it.

The moment that still sticks with me: members of that team eventually stood up at an all-hands and spoke about the impact the transformation initiative had on their work. Voluntarily. Enthusiastically. It doesn’t get better than that — and it happened because they built new experiences, not because we handed them a new process.

This is why the framework matters — and why it maps so directly back to the change pyramid. Culture didn’t change in that group because we trained them on something new. It changed because they experienced something better. The experiment was the vehicle. The experience was the point.


This is way too relevant to the AI space

This brings me to what’s happening with AI right now.

The pattern I’m seeing in AI adoption is almost identical to every major transformation wave I’ve lived through. Organizations deploy tools. They train people. They wait. They wonder why adoption is stalling, why teams are reverting to old behaviors, and why the ROI isn’t materializing. In most cases, AI is being treated as a technology deployment problem when it’s actually a culture change problem.

The organizations achieving meaningful AI adoption share a common thread: they’re not just giving people tools — they’re engineering the experiences that make people want to use them.

An experimentation approach reframes the whole effort. Instead of “we are rolling out AI tools,” the question becomes: what’s one thing we could try, for four weeks, with a clear hypothesis and a way to measure whether it’s working? That question is far easier for a skeptical team to engage with. And when the experiment works — when someone experiences firsthand that AI made their work faster, clearer, or more impactful — that experience does the culture-change work that no mandate ever could.


Some key learnings

A few things I’ve learned about running these well:

  • The hypothesis matters more than the tool. Don’t start with “let’s try this AI feature.” Start with a real friction point, and then ask whether AI could reduce it. The specificity of the problem determines the quality of the experiment.
  • The “reject” option has to be real. If there’s no genuine possibility of reverting, you’ve built a disguised rollout. People will know the difference, and you’ll lose the trust that makes the next experiment possible.
  • Celebrate the learning, not just the wins. Some of the most valuable experiments I’ve run are ones where we proved an assumption wrong. That’s data. That’s progress. A team willing to call a failed experiment a learning moment has already done the cultural work that makes transformation possible.

The organizations I’ve watched successfully navigate large-scale transformation — in agile adoption, in operating model redesign, in AI integration — stopped trying to change culture directly and started engineering the experiences that let culture change itself.

That’s still the most reliable path I know.

What’s one experiment your team could run in the next 30 days? I’d love to hear what you’re working on.