AI in ERP: separating hype from practical application
First published on September 14, 2025, updated on September 16, 2025
If you are a leader in a professional services firm, you are right to be skeptical of the endless claims surrounding Artificial Intelligence.
This is because many software vendors focus on what AI could do in a theoretical future, rather than what it should do to address your most pressing challenges and ambitions today. VOGSY focuses on success measured in revenue growth, project profitability, and client satisfaction.
This article is part of our pragmatic framework for C-suite leaders to evaluate AI in an ERP context. It will help you separate genuine, value-adding capabilities from marketing hype and empower you to ask the right questions of any software vendor. The goal is to move beyond a commoditized discussion of features toward a strategic dialogue about grounded, explainable, and governed intelligence.
The anatomy of AI hype
The current wave of AI hype in the enterprise software market is characterized by vague claims and a focus on technology for its own sake. This approach leads to feature-stuffing—adding AI capabilities because competitors are doing it, not because they solve a validated customer problem.
This stands in stark contrast to a more pragmatic philosophy. The VOGSY approach, for instance, is rooted in a founder's vision to remain focused on a specific field of expertise: building enterprise-grade, project-based ERP software for international professional services firms. This philosophy dictates that AI should not be a sprawling, speculative endeavor. Instead, it must be a focused tool applied to solve these firms' real-world operational challenges.
The difference is fundamental. One approach chases trends, leading to a "black box" of features that are often poorly understood and difficult to control. The other, VOGSY's approach, starts with the problem—how to improve project predictability, reduce administrative friction, and protect margins—and then applies the appropriate technology within a strict framework of control.
A pragmatic evaluation framework for leaders
Leaders need a simple set of criteria to cut through the hype effectively. When evaluating any AI offering from an ERP vendor, you should ask three critical questions that shift the focus from the technology to its business utility and trustworthiness.
1. Is it grounded?
Does this AI capability solve my firm's specific and tangible operational problems? A grounded AI is designed to address your business's everyday friction points and strategic challenges, not to demonstrate a futuristic concept.
Look for concrete examples that reflect your team's daily workflows. For instance, can the AI automate tedious but critical administrative tasks? A query like "Complete my timesheet of last week" is not revolutionary, but it saves time and improves compliance, delivering tangible value every week. Can it provide proactive risk management? A prompt such as "Show me all projects at risk" surfaces potential margin erosion or client dissatisfaction early.
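To make this concrete, here is a minimal illustrative sketch, not VOGSY's implementation, of the kind of plain, auditable business logic a "projects at risk" prompt can resolve to. The Project fields and the 10% margin floor are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Project:
    name: str
    budget: float            # approved budget
    actual_cost: float       # cost booked to date
    percent_complete: float  # 0.0 .. 1.0

def projects_at_risk(projects: list[Project], margin_floor: float = 0.10) -> list[Project]:
    """Flag projects whose projected margin falls below the floor.

    A grounded "Show me all projects at risk" prompt can resolve to
    transparent rules like this rather than an opaque prediction.
    """
    at_risk = []
    for p in projects:
        if p.percent_complete == 0:
            continue  # nothing booked yet, nothing to project
        projected_cost = p.actual_cost / p.percent_complete
        projected_margin = (p.budget - projected_cost) / p.budget
        if projected_margin < margin_floor:
            at_risk.append(p)
    return at_risk

# Example: one healthy project, one heading for margin erosion
portfolio = [
    Project("Brand refresh", budget=100_000, actual_cost=40_000, percent_complete=0.5),
    Project("ERP rollout", budget=200_000, actual_cost=150_000, percent_complete=0.6),
]
print([p.name for p in projects_at_risk(portfolio)])  # ['ERP rollout']
```

The point of the sketch is not the specific rule; it is that a grounded capability maps a natural-language request onto data and logic your team already understands.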
These grounded use cases demonstrate a vendor's deep understanding of your business and a commitment to practical application over speculative promises. They are the first and most important sign that an AI offering is built for business value, not just for a press release.
2. Is it explainable?
Trust requires verification for any professional services leader, particularly in finance and operations. If an AI system recommends a course of action—flagging a project for potential margin erosion, for example—you need to know why. An unexplainable, "black box" recommendation is useless in business because it cannot be audited, validated, or confidently acted upon.
Therefore, the second question is: Can you show me how the AI reached that conclusion? This is the core of Explainable AI (XAI). An explainable system can reveal the data and logic behind its outputs. VOGSY, for example, is committed to building a system where users can ask the AI Assistant how it came to a particular recommendation and see the underlying instructions from the AI management system that guided its analysis.
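As a purely illustrative sketch, one way to picture explainability is a recommendation that carries its own audit trail. The field names and structure below are assumptions for the example, not VOGSY's actual data model or API.

```python
from dataclasses import dataclass

@dataclass
class Explanation:
    """Hypothetical audit trail attached to an AI recommendation."""
    recommendation: str
    data_sources: list[str]     # the managed records the assistant was allowed to read
    rule_applied: str           # the instruction from the AI management system that guided the analysis
    evidence: dict[str, float]  # the figures that triggered the recommendation

flagged = Explanation(
    recommendation="Review 'ERP rollout' for margin erosion",
    data_sources=["timesheets", "project budgets", "purchase invoices"],
    rule_applied="Flag projects whose projected margin falls below 10%",
    evidence={"budget": 200_000, "projected_cost": 250_000, "projected_margin": -0.25},
)

# A user who asks "How did you reach that conclusion?" can be shown exactly this.
print(flagged.rule_applied)
print(flagged.evidence)
```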
This transparency is non-negotiable for enterprise-grade AI. It is the foundation of user adoption—teams will not rely on tools they don't understand. More importantly, it is the foundation of accountability. When a system is explainable, the human remains in the loop and in control, using the AI's output as an informed suggestion, not a blind directive.
3. Is it governed?
The final, perhaps most strategic, question is: What framework ensures the AI operates safely, securely, and without bias? An AI feature without a robust governance framework is an unmanaged risk. It's like giving a powerful new tool to your team without any training, safety protocols, or oversight.
A governed AI operates within a formal AI Management System (AIMS)—a set of documented policies, processes, and controls that dictate how AI is designed, deployed, and maintained.
This system provides the essential guardrails for enterprise AI. It ensures that the AI only uses secure, managed data sources, preventing the "hallucinations" that can arise from accessing unvetted information. It enforces existing user permissions, guaranteeing that the AI cannot be used to circumvent data access controls.
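The sketch below, which assumes a hypothetical in-memory permission service rather than any specific product API, illustrates that guardrail: the assistant can only read records through the same access checks that already apply to the human user.

```python
class PermissionDenied(Exception):
    pass

class InMemoryPermissions:
    """Stand-in for the ERP's existing access controls."""
    def __init__(self, grants: dict[str, set[str]]):
        self.grants = grants

    def can_read(self, user_id: str, record_id: str) -> bool:
        return record_id in self.grants.get(user_id, set())

class GovernedRetriever:
    """Guardrail sketch: the assistant reads data only through the user's own permissions."""
    def __init__(self, permissions: InMemoryPermissions, records: dict[str, dict]):
        self.permissions = permissions  # existing access checks, reused as-is
        self.records = records          # managed, vetted data sources only

    def fetch(self, user_id: str, record_id: str) -> dict:
        if not self.permissions.can_read(user_id, record_id):
            # The assistant inherits the user's limits instead of bypassing them.
            raise PermissionDenied(f"{user_id} may not read {record_id}")
        return self.records[record_id]

permissions = InMemoryPermissions({"analyst-1": {"project-42"}})
retriever = GovernedRetriever(permissions, {"project-42": {"name": "ERP rollout", "margin": -0.25}})
print(retriever.fetch("analyst-1", "project-42"))  # allowed
# retriever.fetch("analyst-1", "project-99")       # would raise PermissionDenied
```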
As explored in the CFO's guide to AI risk management and governance, this governance layer transforms AI from a potential liability into a controlled, auditable asset. It shows the vendor's commitment to responsibility and security.
From tactical features to strategic capability
Applying this three-part framework—grounded, explainable, and governed—reveals a crucial distinction. The market is full of vendors that are adding AI as a tactical feature. However, a truly valuable AI implementation represents a strategic shift in the vendor's organization.
A vendor committed to responsible AI is not just adding a new software layer. They are fundamentally changing how their company operates to build a complete, responsible AI system from the ground up. This involves a deep investment in creating an AI Management System (AIMS), prioritizing a pragmatic, "glass box" approach, and focusing on tangible use cases that deliver immediate value.
This level of commitment lays the foundation for a move from a single source of truth—the traditional promise of ERP—to a single source of intelligence—intelligence that is valuable, reliable, understandable, and securely managed.
Conclusion
The actual value of AI in project-based ERP will not be found in futuristic promises or flashy demonstrations. It will be found in the pragmatic application of technology to solve the concrete, everyday challenges of running a professional services firm. It will be delivered by vendors who prioritize governance over hype and who build their systems on a foundation of trust and transparency.
For leaders navigating this complex landscape, the path forward is clear. Demand that your partners provide solutions grounded in your reality, explainable in their logic, and governed by a framework you can trust, such as ISO 42001. By doing so, you can ensure that your investment in AI is not an exercise in chasing hype, but a strategic step toward building a more efficient, predictable, and profitable business.
Frequently asked questions
What are the key signs of "AI hype" in enterprise software?
The most common signs are vague claims about "revolutionizing your business" without connection to specific outcomes, focusing on technology for its own sake, and adding AI features that don't solve a real, validated customer problem.
What does it mean for an AI feature to be "grounded"?
A grounded AI feature is one that solves a real, specific, and tangible operational problem. For example, a command like "Complete my timesheet of last week" is grounded because it addresses a concrete, everyday friction point and delivers immediate, measurable value.
Why is "explainability" a non-negotiable feature for business AI?
Because trust requires verification. If an AI flags a project risk, a leader needs to know why to validate the recommendation and act on it confidently. An unexplainable "black box" recommendation cannot be audited and is therefore useless in a professional business context.
How can I tell if a vendor is truly committed to "governed" AI?
Look for evidence of a formal AI Management System (AIMS)—a documented set of policies and controls. The gold standard is third-party certification to an internationally recognized standard like ISO 42001, which provides verifiable proof of their commitment.
Isn't all "AI-powered" software basically the same?
Not at all. There is a fundamental difference between a vendor adding a tactical "AI feature" and a partner who has built a strategic, governed AI capability on a foundation of trust, security, and transparency. The latter represents a change in the company's entire operating philosophy.
Mark van Leeuwen
