The Real Reason AI Adoption Stalls Inside DMOs (And It's Not the Technology)
- Jason Swick


I've watched DMOs go through technology transitions before. New websites. CMS migrations. The shift to digital marketing. Social media. And the pattern is almost always the same.
The technology isn't usually what fails.
What fails is everything around it: the culture, the internal permission structure, the gap between what leadership says it wants and what it actually makes safe to try.
The tools land and then quietly stall because the organization wasn't really ready for them, even when it thought it was.
AI is following the same pattern. Just faster, and with a lot more noise around it.
Greg Oates at Matador Network recently brought together a group of DMO CEOs specifically to talk through how to lead AI adoption inside their organizations. The conversation was candid, and honestly some of the most useful industry thinking I've come across on this topic.
What came out of it wasn't a list of tools or a technology roadmap. It was a leadership and culture conversation.
I think that says more about where most DMOs actually are right now than any vendor presentation I've seen.
The thing that stuck with me most from that discussion wasn't about AI at all. It was about shame.
One observer who has been sitting with DMO teams through this transition described a staff member in finance who said there's "almost a culture of shame about using AI." And then during a break, a marketing executive quietly admitted she'd built a couple of custom tools for her own work, but didn't really want to tell anyone.
I've been thinking about this a lot.
Because when people hide how they work, you lose the ability to learn from what's working. Shame-based adoption is probably the worst kind. It's invisible to leadership, impossible to build on, and it creates a weird dynamic where your most curious, most capable people are doing their best work in secret.
If that's happening in your organization, the fix probably isn't a policy or a training program. It's making it visibly okay to experiment, talk about it openly, and recognize the people who are figuring things out.
Early adopters who feel genuinely welcomed tend to bring others along. The ones who feel like they're doing something vaguely forbidden just keep it to themselves.
The CEO fluency gap is a related challenge, and it's probably the one people are least comfortable naming out loud.
Some leaders across the industry feel they don't know enough about AI to confidently champion it. That's understandable given how fast everything is moving. But the effect it has on the rest of the organization is real.
Teams tend not to embrace what leadership can't articulate. Not because they're waiting for permission exactly, but because the tone gets set from the top whether you intend it to or not.
I don't think a DMO CEO needs to be the most technically fluent person in the building. That's not the job. But there's a version of fluency that matters, which is understanding enough about what AI can actually do for your specific organization to have a real conversation about it with your team, your board, and your stakeholders.
The analogy I'd use is that you wouldn't expect a CEO to build the website, but you'd expect them to understand what a good one looks like and why it matters.
AI isn't really different.
The CEOs in Greg's council who seemed furthest along weren't the ones who knew the most about the technology. They were the ones who had made it safe inside their organizations to try new things, talk openly about what wasn't working, and build on what was.
That's a leadership posture, not a technical skill.
One tension that came up in the conversation was whether to inspire AI adoption among staff or mandate it. The group landed somewhere in the middle, which is probably the right instinct. But I think the more useful reframe is this: the inspire-versus-mandate debate is often a sign that the organization hasn't gotten clear enough on what good actually looks like.
When people don't know what they're working toward, inspiration feels abstract and mandates feel arbitrary.
Both lose.
What tends to work better is giving people a concrete, achievable picture of success and then getting out of the way. Something like "by end of Q2, everyone on the team has used an AI tool to complete a real work task" is more useful than a philosophy about adoption. It's specific, it's measurable, and it gives your early adopters something to rally around rather than just something to believe in.
The DMOs that seem to be making real progress aren't necessarily the ones with the most sophisticated AI strategies. In my experience, the most useful automations are rarely the impressive-looking ones. Not the complex multi-step workflows with a dozen nodes that take weeks to build. They're usually the boring, obvious ones. The task someone does the same way every single week that takes two hours and shouldn't. That's where the real time comes back. And usually, they're the ones where someone made the first step small enough to actually take.
I want to be specific about where I think the real opportunity actually is, because I don't think it's where most people are looking.
The DMO leaders who are seeing tangible results aren't getting them from content tools or chatbots, at least not primarily. They're getting them from automating specific, repetitive workflows their teams deal with every day. AI agents that handle a defined task consistently, reliably, and without someone having to think about it each time.
One story that came out of Greg's council discussions involved a partner relations person at a DMO who was initially skeptical, genuinely worried that using AI would make her relationships feel less authentic. After working with a couple of automated workflows, she was saving five or more hours a week. From just a couple of tasks. And that's the part that sticks with me.
Five hours a week compounds. Over a year that's more than 250 hours returned to one person. Now think about what that looks like across a ten-person team where everyone finds even one workflow like that. You're not talking about marginal efficiency gains. You're probably talking about the equivalent of adding capacity without adding headcount. That's a pretty different conversation than most people expect when they hear "AI agent."
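The capacity math above is worth checking on the back of an envelope. A minimal sketch, using the numbers from the paragraph (five hours per week, a ten-person team) plus one assumption of my own, a round 2,000-hour working year:

```python
# Back-of-envelope check of the capacity claim. The five hours, 52 weeks,
# and ten-person team come from the article; the 2,000-hour working year
# is my own round-number assumption.
HOURS_SAVED_PER_WEEK = 5
WEEKS_PER_YEAR = 52
TEAM_SIZE = 10
WORKING_YEAR_HOURS = 2000

hours_per_person_per_year = HOURS_SAVED_PER_WEEK * WEEKS_PER_YEAR  # 260
team_hours_per_year = hours_per_person_per_year * TEAM_SIZE        # 2600

# Rough full-time-equivalents that represents.
fte_equivalent = team_hours_per_year / WORKING_YEAR_HOURS

print(hours_per_person_per_year)  # 260 -- the "more than 250 hours"
print(fte_equivalent)             # 1.3 -- more than a full headcount
```

Even with conservative assumptions, one modest workflow per person adds up to more than a full-time hire's worth of hours.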
That kind of result is available to most DMO teams right now.
The reason more organizations aren't there yet usually isn't that the tools are too complex. It's that the upstream work hasn't been done: mapping out what the actual workflows are, identifying the specific tasks that are repetitive and well-defined enough for automation, and giving any AI tool enough context about the organization to be useful. That groundwork takes a few hours and doesn't require a vendor or a consultant. It just requires someone to sit down and actually do it.
So here's the most practical thing I can suggest, and you could probably do this week.
Run a 30-minute meeting with your team and ask everyone to name one task from the past week that felt repetitive, draining, or like it was keeping them from work that actually matters. Write everything down. Don't evaluate it, don't prioritize it in the meeting, just collect it.
That list is probably your AI roadmap.
Not a strategic framework, not a vendor evaluation process, just an honest inventory of where your team's time is going. I know that sounds almost too simple, but in my experience working with DMO teams, most organizations already know exactly where the friction is. They've just never been asked to say it out loud in a room together.
The answers tend to be pretty consistent: partner communications that follow the same structure but get written individually each time, media requests for destination information that sit in someone's queue for three days, data that exists somewhere but takes two hours to actually compile into something shareable. Those are exactly the areas where AI gives time back quickly, and they're worth knowing before any vendor conversation.
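If you want a first cut at prioritizing that inventory, one simple approach is to score each task by time per occurrence times frequency, keeping only the ones that are well-defined enough to automate. A minimal sketch, where the tasks and numbers are hypothetical examples of the kind that come out of this meeting, not real data:

```python
# Hypothetical friction list from the 30-minute meeting. Each entry:
# (task, hours per occurrence, occurrences per month, well-defined enough to automate?)
tasks = [
    ("Partner update emails (same structure each time)", 2.0, 8, True),
    ("Compiling visitor data into a shareable report", 2.0, 4, True),
    ("Media requests for destination information", 0.5, 12, True),
    ("Board presentation storytelling", 4.0, 1, False),  # judgment-heavy: keep it human
]

# Keep the automatable tasks and rank by monthly hours reclaimed.
candidates = [
    (name, hours * freq)
    for name, hours, freq, automatable in tasks
    if automatable
]
for name, monthly_hours in sorted(candidates, key=lambda t: -t[1]):
    print(f"{monthly_hours:5.1f} h/month  {name}")
```

The point isn't the spreadsheet math; it's that the biggest wins usually surface immediately once the list exists, and that's the conversation to have before any vendor demo.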
I'll close with something I keep coming back to. DMOs that figured out social media and digital marketing early didn't do it because they had bigger budgets or smarter people. They did it because someone in leadership made it safe to try things, made it clear what they were working toward, and normalized talking about what was and wasn't working.
I've watched that play out across enough organizations over the years that I'm pretty convinced it's the pattern. The technology is almost never the hard part. It's creating the conditions where people feel like it's okay to figure it out together, out loud, without everything having to be polished before they share it.
The technology is ready. The question is whether the organization is.
I write about AI and destination marketing for DMO professionals. If you want pieces like this a few days before they go public, you can join the early access insiders list at https://www.swix.ai/#insider



