The enterprise world is currently haunted by a ghost in the machine: the belief that Large Language Models (LLMs) are “plug-and-play” employees.
The pitch from AI providers is seductive: “Connect Claude to your SharePoint, and by lunch, it will have mastered your operational history, parsed every complex margin calculation, and become an expert in your proprietary workflows.”
In reality, if you simply point an LLM at a messy file store, you don’t get an expert. You get a confident hallucinator. For service providers, the challenge isn’t just building the bot—it’s convincing the client that they aren’t buying a product, but building a Continuous AI Automation pipeline.
1. The Expectation Gap: ERP Power for a Fraction of the Cost
The client’s perception is often skewed by the linguistic polish of AI. Because an agent sounds smart, we assume it is smart. However, the real breakthrough isn’t just in the chat box—it’s in the architecture behind it.
- The Opportunity: We are entering a paradigm where integrating Data Engineering with sophisticated Workflow Automation (using tools like Make.com) can replicate ERP-level solutions.
- The Paradigm Shift: For service providers, this is a goldmine of solution-specific work. By connecting an agent framework to automated workflows, you aren’t just giving them a search tool; you are building a custom operating system that handles procurement or finance at a fraction of the cost of legacy software.
- The Price Barrier: Clients struggle to quantify the cost because they think the AI does the work for “free.” Our value is in the Translation Layer—the plumbing that ensures the agent’s logic triggers the right automation at the right time.
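To make the "Translation Layer" concrete, here is a minimal sketch of the plumbing it describes: a routing table that maps an agent's structured intent to a workflow webhook. The intent names and webhook URLs are illustrative placeholders, not real Make.com endpoints.

```python
import json
import urllib.request

# Hypothetical routing table: maps an agent's structured "intent" to an
# automation webhook (URLs are placeholders, not real endpoints).
AUTOMATION_ROUTES = {
    "create_purchase_order": "https://hook.make.com/example-procurement",
    "flag_invoice_mismatch": "https://hook.make.com/example-finance",
}

def route_agent_action(intent: str, payload: dict) -> urllib.request.Request:
    """Build the webhook call for an agent intent; fail loudly if unwired."""
    if intent not in AUTOMATION_ROUTES:
        raise ValueError(f"No automation wired for intent: {intent}")
    return urllib.request.Request(
        AUTOMATION_ROUTES[intent],
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = route_agent_action("create_purchase_order", {"sku": "A-100", "qty": 5})
```

The design choice worth noting: the agent never calls an automation directly. Everything passes through this explicit mapping, which is exactly the billable "plumbing" the client cannot see in the chat box.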

2. From Software Engineering to a Library of “Skills”
There is a popular meme in the dev world right now that “software engineering is just a bunch of .md files.” While it’s a joke, it reflects a fundamental truth in the agentic era: we are moving from rigid code to a library of Markdown-based Skills.
- The Skill Library: We store “Skills”—defined in structured Markdown—in repositories like GitHub. This allows for version control and transparency.
- The Data Reality Check: While the logic may live in a .md file, the output is only as good as the data fed into the prompt. No model can guarantee zero hallucination if the underlying data is garbage.
- The Moving Target: Optimizing how much data to provide and how to structure it remains a moving target. Because model reasoning capabilities improve on a monthly basis, the “optimal” prompt today might be redundant tomorrow. We aren’t just managing code; we are managing the evolving relationship between data and model intelligence.
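As a hedged sketch of what a "Skill" in such a library might look like, here is one possible Markdown layout. The frontmatter fields, the data-source path, and the section names are illustrative conventions, not a standard any framework mandates.

```markdown
---
name: margin-calculation
version: 1.2.0
data_sources:
  - sharepoint://finance/margins   # placeholder path, not a real location
---

# Skill: Margin Calculation

## When to use
The user asks about product or project margins.

## Steps
1. Retrieve the latest margin sheet from the configured data source.
2. If the sheet is older than 30 days, warn the user before answering.
3. Never estimate a margin when the underlying figure is missing.
```

Because the file is plain Markdown in GitHub, every change to the agent's behavior arrives as a reviewable diff, which is what makes the "version control and transparency" claim above operational.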
3. The Hidden Cost: When Information Isn’t “Good Enough”
While Continuous AI Automation is the goal, it comes with a reality check: The Information Tax. You cannot automate what you cannot define.
- Quality Enhancements: A significant portion of the work involves handling data that isn’t ready for AI. Often, SharePoint files are fragmented, outdated, or contradictory.
- The “Cleanup” Sprint: Continuous automation requires an ongoing budget for data sanitization. Service providers must manage the expectation that “Continuous” means ongoing maintenance of the data’s integrity. If the input is low-quality, the automation will fail, no matter how good the model is.
4. The “Help” Button: The End-User as the QA Engineer
In this new world, “Finished” is a dangerous word. This is why we must design for Human-in-the-Loop feedback.
- The Interaction: A user chats with the agent.
- The Friction: The agent misses a key metric or the automation fails due to a data gap.
- The Feedback Loop: The user flags the error directly in the interface.
- The Sprint: That feedback flows into a backlog. Developers then refine the Markdown skill or perform the necessary data quality enhancements to “teach” the framework how to handle that specific case.
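The four-step loop above can be sketched in code. This is a minimal illustration, not a real ticketing API: the class names, the `flag` entry point, and the two cause labels are assumptions chosen to mirror the flow described.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class FeedbackItem:
    conversation_id: str
    user_note: str
    suspected_cause: str  # e.g. "skill_gap" or "data_gap" (illustrative labels)

@dataclass
class Backlog:
    items: List[FeedbackItem] = field(default_factory=list)

    def flag(self, conversation_id: str, user_note: str, suspected_cause: str):
        """Called when the user flags an error directly in the interface."""
        self.items.append(FeedbackItem(conversation_id, user_note, suspected_cause))

    def sprint_queue(self, cause: str) -> List[FeedbackItem]:
        """Filter the backlog so a sprint can target one failure class."""
        return [i for i in self.items if i.suspected_cause == cause]

backlog = Backlog()
backlog.flag("conv-42", "Agent ignored the Q3 margin", "skill_gap")
backlog.flag("conv-43", "Source sheet was outdated", "data_gap")
```

The split by `suspected_cause` is the point: "skill_gap" items route to a Markdown edit, "data_gap" items route to a data-quality sprint, so the end-user's click lands in the right team's queue.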
5. Designing for the Lifecycle
If you want to build a framework that actually works, you have to stop selling a destination and start selling the Journey of Improvement.
| Phase | Strategic Focus | Technical Tooling |
| --- | --- | --- |
| Ingestion | Cleaning the SharePoint “Swamp” | Data Pipelines / RAG |
| Orchestration | ERP-Level Logic | Make.com / Workflows |
| Logic | Building Modular “Skills” | .md Files / GitHub |
| Refinement | Handling Information Gaps | Sprints / Monthly Model Re-tuning |
The Conclusion: The Bridge is the Product
The real story of AI in the enterprise isn’t about the model you choose; it’s about the ecosystem you build around it.
We are providing a way to bypass multi-million dollar ERP licenses in favor of lean, agentic frameworks. But that framework requires a management layer (GitHub), a feedback loop (Sprints), and a relentless commitment to data quality. The value isn’t in the “Magic Wand”—it’s in the factory that keeps the wand grounded in clean, actionable data.
