Inside the Stack: Exploring AI and the Content Supply Chain
Last month, Real Story Group hosted a virtual session of our MarTech Stack Leadership Council focused on a timely and complex topic: how AI is transforming enterprise content supply chains.
Amid the ongoing hype around generative AI, this session revealed several reality-grounded insights, drawn from real enterprise experience — including a candid case study of a large-scale, AI-powered campaign executed under tight constraints.
As a brief aside, and perhaps fittingly, I attempted to use GenAI to create a simple image for this article. After multiple frustrating iterations and off-brand results, I gave up and pieced together a few of the less-wrong outputs. It was a helpful reminder: even seemingly simple creative tasks can expose the limits of today’s GenAI tools. The image below was done the old-fashioned way.
Case Study: Speedy Creation, Sluggish Review
One Council member shared their experience managing a major global campaign. With a hard deadline and high expectations, their team employed generative AI to produce thousands of images and text variants across multiple languages.
Assets were created using GenAI tools tied to a proprietary DAM, with campaign briefs and metadata feeding the generation pipeline. In theory, the resulting content supply chain became highly automated — from creation to storage, activation, and reporting.
In practice, things were more complex.
“We created content faster than we ever have before,” the campaign lead shared. “But reviewing and correcting it took just as long as, if not longer than, doing it manually.”
Despite some attractive upstream automation, several bottlenecks emerged:
- Heavy rewriting required to align text with brand tone and regional nuance
- AI-generated visuals were frequently off-brand, even when technically correct
- Localization issues (e.g. text fit and design adaptation) required manual rework
- Proofing processes involved high-level stakeholders, adding further delays
- Metadata tagging was inconsistent and difficult to manage at scale
The end result? The campaign hit its timeline, but only with extended hours, strained morale, and a modest net productivity gain.
Council Takeaways: AI Isn’t the Main Challenge, Process Is
The story resonated widely. Across the Council, members echoed the same pattern: AI speeds up creation, but breaks down without strong supporting infrastructure.
Some common pain points:
- AI shifted the bottleneck from production to orchestration and quality control
- Pilot programs outpaced internal processes, especially around metadata and reuse
- Translation and localization workflows couldn’t always keep pace with automated output
- Review protocols were often informal or inconsistent, slowing the end-to-end pipeline
- Measurement was difficult, with no consistent metric for determining ROI
Still, members agreed the experiment was valuable, especially in revealing the pros and cons of the specific toolset employed. While the campaign may not have achieved cost savings, it clarified what foundational work is still required to scale AI effectively.
RSG’s Perspective: AI Governance Is the New Enablement
At Real Story Group, we believe AI governance must evolve from passive oversight to active enablement. Enterprises seeing the most success with GenAI are:
- Defining metadata standards across teams
- Documenting prompt templates and model selection guidelines
- Creating centralized libraries of reusable content and templates
- Promoting AI fluency across content, marketing, and ops teams
- Aligning reporting frameworks with asset-level performance data
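To make the first of these practices concrete, here is a minimal sketch of what "defining metadata standards" can look like in code: a shared set of required fields and rules that every generated asset must pass before it enters the DAM. The field names and rules below are hypothetical illustrations, not drawn from any Council member's actual stack or from an RSG schema.

```python
# Minimal sketch of enforcing a shared metadata standard on GenAI assets.
# Field names and rules are hypothetical examples, not a real schema.

REQUIRED_FIELDS = {"asset_id", "campaign", "language", "brand_approved", "source"}
ALLOWED_SOURCES = {"human", "genai", "hybrid"}

def validate_metadata(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record passes."""
    problems = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    if record.get("source") not in ALLOWED_SOURCES:
        problems.append(f"unknown source: {record.get('source')!r}")
    if record.get("source") == "genai" and not record.get("brand_approved"):
        problems.append("GenAI assets require brand approval before activation")
    return problems

# Example: a GenAI image variant that has not cleared brand review yet
issues = validate_metadata({
    "asset_id": "img-0042",
    "campaign": "spring-launch",
    "language": "de-DE",
    "brand_approved": False,
    "source": "genai",
})
print(issues)  # flags the missing brand approval
```

The point is less the code than the discipline: a check like this, run at ingestion time, is what turns a metadata "standard" from a slide into an enforced contract between generation tools and the DAM.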
Controls remain essential to enterprise AI deployment, but governance ultimately needs to move beyond restriction toward alignment. When metadata, tooling, and strategy align, GenAI can deliver real business value. Much like testing and optimization before it, this work requires iterative development and a concerted program rather than ad-hoc pilots.
Scaling the Supply Chain: From Fragmented to Modular
Another Council member shared a broader view of evolving content operations. Their content ops team is actively working to shift from static, channel-bound content to a more modular, dynamic framework — supporting both one-to-many and personalized experiences.
Key moves include:
- Adopting an omnichannel content platform to consolidate asset storage and delivery
- Building modular content components that can flex across regional needs and, ideally, channels
- Working with agency partners to ensure naming conventions, rights management, and delivery systems are consistent
- Investing in foundational systems hygiene, such as data cleanup and governance protocols
This firm is also exploring AI-based automation and content value enhancement, but recognizes that a core, richly categorized content set would be a critical starting point.
Members agreed: a modular content strategy is critical, not just for personalization, but for reuse, scale, speed, and, ultimately, deploying AI effectively.
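As a loose illustration of what a "modular content component" can mean in practice, here is a hypothetical sketch: one canonical message that carries its own regional variants and adapts to per-channel constraints at render time. The structure, locale codes, and character-limit rule are illustrative assumptions, not the Council member's actual framework.

```python
# Hypothetical sketch of a modular content component: one canonical
# message, regional variants, and simple per-channel adaptation.
from dataclasses import dataclass, field

@dataclass
class ContentComponent:
    component_id: str
    canonical_text: str
    # Regional overrides keyed by locale, e.g. "de-DE"
    variants: dict = field(default_factory=dict)

    def render(self, locale: str, max_chars: int = 0) -> str:
        """Pick the best variant for a locale, then adapt to channel limits."""
        text = self.variants.get(locale, self.canonical_text)
        if max_chars and len(text) > max_chars:
            # Truncate for tight channels (e.g. a push notification)
            text = text[: max_chars - 1].rstrip() + "…"
        return text

hero = ContentComponent(
    component_id="hero-01",
    canonical_text="Launch your campaign in days, not months.",
    variants={"de-DE": "Starten Sie Ihre Kampagne in Tagen, nicht Monaten."},
)
print(hero.render("de-DE"))                # German variant, full length
print(hero.render("fr-FR", max_chars=20)) # falls back to canonical, truncated
```

Storing content this way is what lets the same component serve both one-to-many and personalized experiences: the variant and channel logic live with the asset, not buried in each delivery template.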
Bottom Line: AI Won’t Fix a Broken Supply Chain
The overall message from this Council session? AI amplifies both what's already working and what isn't. Without a strong foundation of metadata, governance, systems interoperability, and review workflows, generative AI adds more complexity than clarity.
A few closing observations:
- Many organizations still lack agreed-upon metrics for measuring GenAI effectiveness
- The fastest tech won’t help if workflows are stuck in committee
- Smaller vendors are useful for experimentation, but integration and sustainability remain key concerns
- Composability is appealing, but only if foundational systems (like metadata and DAM/OCP) are ready
For now, AI in the content supply chain remains an exciting work in progress. But progress is being made — and the organizations that succeed will be those investing in the whole chain, not just the generative parts.
MarTech Stack Leadership Council: Atlanta | May 6–8, 2025
Join fellow enterprise leaders for two days of candid discussion, hands-on exercises, and peer benchmarking. We’ll map your stack, assess AI effectiveness, and preview RSG’s latest research on Agentic AI. Learn more about the Real Story Group MarTech Stack Leadership Council.