
We’ve been going to Adobe Summit for a while now, and it continues to be one of the more useful weeks of the year for our team. It gives us time with clients and partners, along with a chance to step out of day-to-day delivery and get a clearer view of how things are actually evolving across analytics and customer experience.
This year followed a pretty familiar pattern.
We made it out to the Sphere. Had a few debates about whether The Wizard of Oz has always been that intense or if the venue just makes everything feel bigger. But most of the conversations we had throughout the week kept coming back to the same thing.
AI is everywhere.
Every keynote, every demo, every roadmap. AI is baked into everything across the Adobe stack.
But that’s not the interesting part.
The real takeaway is that AI isn’t the differentiator anymore. It’s the baseline.
What matters now is how well organizations can support it, and that comes down to three things: the data behind it, the systems connecting it, and the ability to operationalize it in a way that actually holds up.
One of the more honest lines we heard all week captured it simply: AI doesn’t fix bad data, it amplifies it.
Once you start looking at the stack through that lens, a lot of what we saw at Summit starts to make sense.
Most organizations we work with are not starting from scratch. They already have the core Adobe stack in place, including Customer Journey Analytics (CJA), Real-Time CDP (RT-CDP), and Journey Optimizer (AJO), and they can usually describe how it is supposed to work end to end.
CJA surfaces insight, RT-CDP builds audiences, and AJO activates journeys across channels.
On paper, it is clean and connected. In practice, it still behaves more like a sequence than a system.
Insight moves between teams, segments are rebuilt across tools, and journeys are assembled step by step in ways that reflect organizational structure more than system design. Everything works, but it requires more coordination and manual alignment than most teams would prefer.
That model made sense when reporting cycles were slower and activation did not need to happen in real time. It starts to break down as expectations move toward speed, consistency, and continuous activation.
AI is now embedded directly inside the systems teams already use.
It is no longer sitting off to the side as an experiment layer. It is inside CJA, RT-CDP, and AJO, which changes how work should move.
Instead of a linear process where teams analyze behavior, export findings, rebuild audiences, and then activate campaigns, the direction is toward environments where those steps are more connected and sometimes happening at the same time.
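To make the contrast concrete, here is a minimal Python sketch of the two operating models described above. None of this is Adobe code; the function names and the `views > 3` threshold are invented for illustration. The point is structural: in the linear model each step completes as a batch before the next begins, while in the connected model every event flows through analysis, audience building, and activation as it arrives.

```python
# Illustrative contrast, not Adobe code: the same steps run as a linear
# hand-off versus as a connected loop reacting to individual events.

def analyze(event):
    # Stand-in for insight generation (e.g. CJA surfacing behavior).
    return {"segment": "high_intent"} if event["views"] > 3 else None

def build_audience(insight):
    # Stand-in for audience construction (e.g. RT-CDP).
    return {"audience": insight["segment"]}

def activate(audience):
    # Stand-in for journey activation (e.g. AJO).
    return f"journey started for {audience['audience']}"

# Linear model: each step finishes before the next begins,
# with a hand-off (often between teams) at every boundary.
def linear(events):
    insights = [i for e in events if (i := analyze(e))]
    audiences = [build_audience(i) for i in insights]
    return [activate(a) for a in audiences]

# Connected model: each event moves through all steps as it happens.
def connected(event):
    insight = analyze(event)
    if insight:
        return activate(build_audience(insight))
    return None

events = [{"views": 5}, {"views": 1}]
batch_result = linear(events)
stream_result = [connected(e) for e in events]
```

The output is the same either way; what changes is when each step runs and how many hand-offs sit between them.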
That is where friction becomes visible.
The issues are familiar: inconsistent definitions, audiences rebuilt across tools, manual handoffs between teams. They just show up faster now.
None of this is new. What has changed is visibility.
When systems can move faster, anything that slows them down becomes the constraint.
One of the areas we spent the most time on was decisioning within Journey Optimizer, because this is where AI moves from concept to execution.
At its core, decisioning is about determining what experience should be delivered to a specific customer, at a specific moment, based on what the system knows about them.
AJO does this by combining a centralized catalog of decision items (offers, messages, content) with rules, policies, and AI-driven ranking to select the most relevant experience in real time.
This is the execution layer, where eligibility is evaluated, constraints are applied, and the next best action is determined inside a journey.
Experimentation is also expanding beyond traditional A/B testing. Teams can now test variations in journey logic itself (including paths, sequencing, timing, and channel mix) and iterate faster than legacy analytics cycles would allow.
So decisioning isn’t just about selecting content. It’s about continuously shaping the experience as it’s delivered.
Separate from Journey Optimizer’s decisioning layer, Adobe introduced Agent Orchestrator — a reasoning and coordination layer that sits above Adobe Experience Platform.
Where decisioning operates inside the moment of execution, Agent Orchestrator works across systems to interpret intent, plan actions, and coordinate workflows end to end.
It connects goals to execution by orchestrating specialized agents across Adobe applications.
Within that system, specialized agents support different parts of the workflow, helping teams move more quickly from insight to coordinated execution across the stack.
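The coordination pattern is easier to see in miniature. The sketch below is a toy version of the idea, assuming nothing about Adobe's actual implementation: a coordinator takes a goal and a plan, then routes each step to whichever specialized agent is registered for it. The agent functions and their messages are entirely hypothetical.

```python
from typing import Callable

# Toy coordination pattern; not Adobe's Agent Orchestrator API.
# Each "agent" is just a function that handles one part of the workflow.
def audience_agent(goal: str) -> str:
    return f"audience built for '{goal}'"

def journey_agent(goal: str) -> str:
    return f"journey drafted for '{goal}'"

class Orchestrator:
    """Routes each step of a plan to the agent registered for that step."""
    def __init__(self) -> None:
        self.agents: dict[str, Callable[[str], str]] = {}

    def register(self, step: str, agent: Callable[[str], str]) -> None:
        self.agents[step] = agent

    def run(self, goal: str, plan: list[str]) -> list[str]:
        # Interpret intent (the goal), then coordinate agents step by step.
        return [self.agents[step](goal) for step in plan]

orch = Orchestrator()
orch.register("audience", audience_agent)
orch.register("journey", journey_agent)
results = orch.run("win-back lapsed members", ["audience", "journey"])
```

Even in this stripped-down form, the value and the risk are visible: the coordinator is only as good as the agents it can call and the shared definitions they operate on.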
Even with these advancements, the requirement underneath hasn’t changed.
For decisioning and orchestration to work reliably, a few things still have to be true: the data has to be right, the definitions have to be consistent across teams, and the systems have to be genuinely connected.
If those foundations are not solid, the system still runs—but the outcomes become harder to trust and harder to explain.
AI does not usually fail in obvious ways. It produces outputs that look reasonable, even when the underlying inputs are not fully aligned.
There is a lot of conversation about AI making teams more efficient, and in some areas that is true. What we are seeing more consistently, however, is a shift in where the work actually happens.
Teams are spending less time moving data between systems, rebuilding audiences, and stitching outputs together, and more time defining the structure that those systems rely on.
That includes data modeling, schema design, governance, and alignment on business definitions, along with ongoing validation and monitoring of system behavior.
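A small example of what that validation work looks like in practice. This is an illustrative sketch, not a specific AEP feature: the required fields and the allowed consent values are assumed business definitions, and in a real implementation they would come from the organization's own schemas and governance policies.

```python
# Illustrative pre-activation data check; not a specific Adobe feature.
# Field names and allowed values are assumed business definitions.
REQUIRED_FIELDS = {"customer_id", "email", "consent_status"}
ALLOWED_CONSENT = {"opted_in", "opted_out", "pending"}

def validate_record(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record is usable."""
    problems = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    consent = record.get("consent_status")
    if consent is not None and consent not in ALLOWED_CONSENT:
        problems.append(f"unknown consent value: {consent!r}")
    return problems

records = [
    {"customer_id": "c1", "email": "a@example.com", "consent_status": "opted_in"},
    {"customer_id": "c2", "consent_status": "subscribed"},  # drifted definition
]
issues = {r["customer_id"]: validate_record(r) for r in records}
```

The second record is exactly the kind of input that AI will happily act on: it looks plausible, but the consent value has drifted from the agreed definition, and nothing downstream will flag it unless a check like this runs first.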
Even capabilities like AI Assistant inside Journey Optimizer reinforce this shift. It makes it easier to generate audiences, troubleshoot workflows, and navigate the platform, but it also increases reliance on having a strong foundation in place.
When execution becomes easier, the quality of what you are executing against matters more.
The technology is ahead of most operating models. Organizations already have the core platforms and capabilities in place. The capability is not the issue. The gap is in the connective layer: the shared definitions, governance, and operational alignment that let those platforms work together.
Without that, everything still functions. It just does not function as a system.
And when AI is layered on top, those gaps become more visible, not less.
This is the work we spend most of our time on.
Not introducing new platforms or adding complexity, but helping teams get the foundation right, from data modeling and governance to shared business definitions, so their existing systems actually work the way they were intended to.
Because AI does not create good outcomes on its own. It depends entirely on the system underneath it.
AI is no longer something teams are experimenting with. It is something their systems are expected to support as part of how work gets done.
That changes the conversation.
It is less about what the tools can do, and more about whether the foundation is strong enough to support them.
When the data is right, the systems are connected, and decisioning is clearly defined, AI becomes genuinely useful.
When those things are not in place, it simply accelerates the existing gaps.
Not sure on your next step? We'd love to hear about your business challenges. No pitch. No strings attached.