
This didn’t start as a formal AI project. It started with our CEO’s keynote—a spark at a company event that made our IT teams want to see what was possible. How could AI help reduce friction across the content and tools we already use? Red Hat’s open, curious, tinker-driven culture kicked in fast. With new tools at their fingertips and a flood of interest in leveraging AI internally, we did what Red Hatters do: we experimented.

As we explored AI and chatbots and learned what our systems, teams, and content were ready for, the cast of characters changed along the way. People rolled on, rolled off, and the project weathered more than one handoff. But this wasn’t a project that failed; it was one that stretched. When it stalled, we didn’t give up. We mapped the damage, understood the limits, and rebuilt with intent. We were early. We were optimistic. And we kept going, even when it got messy. This is our field journal: three phases of an AI assistant story that’s still evolving.

AI challenge: Construct a chatbot that can leverage constantly changing, unstructured go-to-market (GTM) content to reduce sales friction by providing brief and accurate answers to seller questions as well as links to more detailed information.

The build: We built this assistant on Red Hat OpenShift Platform Plus and Red Hat OpenShift AI, using Granite as the core model, giving us enterprise-grade model serving and deployment. LangChain orchestrated the retrieval flow, and PGVector, an extension to the popular PostgreSQL database, handled vector storage. We used MongoDB to log interactions with the AI. To preserve context from long-form documents, we used structure-aware tools like Docling, and we experimented with Unstructured’s Python libraries to pull speaker notes from slides. While that code didn’t make it into production, the experiment revealed just how crucial structure and formatting are to successful retrieval, lessons that now guide our preprocessing efforts.
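The retrieval flow LangChain orchestrated for us follows a common pattern: embed content chunks, store the vectors, and rank them against an embedded question. The sketch below illustrates that pattern with a toy bag-of-words embedding and an in-memory store. It is not our production LangChain/PGVector code, and the sample documents and metadata are invented for illustration.

```python
import math
import re
from collections import Counter

def embed(text):
    """Toy embedding: a sparse term-frequency vector.
    (In production this would be a model-served embedding.)"""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

class ToyVectorStore:
    """Minimal in-memory stand-in for a vector store such as PGVector."""
    def __init__(self):
        self.rows = []  # (vector, chunk_text, metadata)

    def add(self, chunk, metadata=None):
        self.rows.append((embed(chunk), chunk, metadata or {}))

    def search(self, question, k=2):
        """Return the k chunks most similar to the question."""
        qv = embed(question)
        ranked = sorted(self.rows, key=lambda r: cosine(qv, r[0]), reverse=True)
        return [(chunk, meta) for _, chunk, meta in ranked[:k]]

store = ToyVectorStore()
store.add("OpenShift AI serves and deploys models at enterprise scale.",
          {"source": "platform-overview.pdf"})
store.add("Sellers can find pricing guidance in the GTM playbook.",
          {"source": "gtm-playbook.pptx"})

hits = store.search("Where do sellers find pricing guidance?", k=1)
```

In the real system, `embed()` is a served embedding model, the store is PGVector, and LangChain wires the retrieved chunks into the prompt sent to Granite.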

Phase 1: The hope phase

“It worked… and we can make it work even more”  

We had early traction and assumed more data would just mean more value.

The assistant was originally built as a ticket deflection chatbot for an internal tool, and stakeholders and early testers were optimistic that the initial release could scale to include other data sources and use cases. It had worked exceedingly well on approximately 300 structured ServiceNow knowledge base articles, so surely it could handle 5,000+ unstructured, sprawling, real-world go-to-market materials too. Right? Any PDF, PowerPoint, or document is the same… or so we thought.

But it didn’t give us the expected answers, nor did it suggest the content we were hoping for. We tried everything: dashboards, filters, scoring models, tag rules, AI-friendly targeted template sections for our presentations, prompt guidance, and more. We didn’t yet realize that our tuning efforts were constrained by a system not designed for this type or scale of content, so we kept iterating, believing we could solve it with just the right tweaks.

Lessons playbook: Phase 1 – where assumptions begin

  • Don’t confuse early wins with AI readiness: Success in semi-structured knowledge base articles doesn’t mean you’re ready for unstructured, enterprise-scale content. That early win gave us false confidence
  • Content shape matters: Know your data. Is it visual, structured, tagged, or even machine-readable? Your system setup must match the shape and complexity of the content it’s retrieving from
  • Metadata only helps if it's used: Unused or inconsistent tags won’t improve retrieval. Metadata isn’t magic, it has to be activated and integrated into the system’s logic
  • Curation matters when classification signals are limited: If tagging is unreliable or unavailable, asset curation becomes essential to improve retrieval accuracy
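The "metadata only helps if it's used" lesson has a concrete shape: retrieval has to filter or boost on tags, not just store them. The sketch below shows one way to activate metadata by filtering candidates before ranking. The field names (`audience`, `doc_type`) are hypothetical, not our production schema.

```python
def retrieve(chunks, question_terms, required_tags, k=2):
    """Rank chunks by term overlap with the question, but only among
    chunks whose metadata matches the required tags. Filtering on
    metadata is what 'activates' it in the retrieval logic."""
    def matches(meta):
        return all(meta.get(key) == val for key, val in required_tags.items())

    def overlap(chunk):
        words = set(chunk["text"].lower().split())
        return len(words & question_terms)

    candidates = [c for c in chunks if matches(c["meta"])]
    return sorted(candidates, key=overlap, reverse=True)[:k]

corpus = [
    {"text": "Pricing tiers for the enterprise offering.",
     "meta": {"audience": "sellers", "doc_type": "playbook"}},
    {"text": "Pricing history and internal cost models.",
     "meta": {"audience": "finance", "doc_type": "report"}},
]

# Without the tag filter, both chunks compete on the word "pricing";
# with it, only the seller-facing chunk is a candidate.
results = retrieve(corpus, {"pricing", "tiers"}, {"audience": "sellers"}, k=1)
```

The inverse failure mode is the one the lesson warns about: if tags are inconsistent or never referenced, the filter either drops good content or does nothing at all.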

Red Hat reflection

Looking back now, the first release of the AI assistant worked well as a ticket deflection chatbot, with response accuracy well over 80% and promising user feedback. Early wins fed our optimism, and Red Hat’s open source mindset helped us experiment boldly, even before we fully understood what we were building toward. But as we soon learned, success with structured content didn’t prepare us for the complexity to come.

Learn more

To help enterprises build a solid foundation of knowledge for AI, understand ethical considerations, and see the value of open source in AI, take the no-cost Red Hat AI Foundations course.
 

In Phase 2: The crash, we’ll take you into the thick of it—where differences in scoring, data complexity, and cross-team misalignment quietly stalled progress until we finally discovered what was really going wrong.


About the author

Andrea Hudson is a program manager focused on making AI tools useful in the messy world of enterprise content. At Red Hat since 2022, she helps teams untangle complexity, connect the dots, and turn good intentions into working systems. Most recently, she helped reshape an AI chatbot project that aimed to surface the right go-to-market content but ran into the chaos of unstructured data.
Her background spans product launches, enablement, product testing, data prep, and evolving content for the new AI era. As a systems thinker with early training in the U.S. Navy, she relies on what works in practice, not just in theory. She’s focused on building things that scale, reduce rework, and make people’s lives easier.
Andrea writes with honesty, sharing lessons from the projects that don’t go as planned. She believes transparency is key to growth and wants others to have a starting point by sharing the messy middle and not just the polished end.
When she’s not wrangling AI or metadata, you’ll find her tinkering with dashboards, learning low/no-code tools, playing on Red Hat’s charity eSports teams, recruiting teammates, or enjoying time with her family.

