Machine Learning · Generative AI (RAG-LLM) · Expert Systems
How they differ, how they interact, and why all three matter in high-stakes decision support.
Modern AI is not a single technology.
At amsafis, decision-support systems combine three distinct forms of Artificial Intelligence, each with a different logic, different strengths and different operational requirements: Machine Learning, Generative AI grounded by retrieval (RAG-LLM), and Expert Systems.
Each of these is an AI in its own right. None replaces the others.
Together, they form a closed loop:
data → prediction → rules → evidence → decision
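As a minimal illustrative sketch of this loop (all function names, weights and thresholds below are hypothetical, not amsafis components), the four stages can be traced in a few lines:

```python
# Minimal sketch of the closed loop: data → prediction → rules → evidence → decision.
# Every name, weight and threshold here is illustrative only.

def predict(record):
    # ML stage: a stand-in risk score learned from structured data.
    return 0.7 * record["age_norm"] + 0.3 * record["lab_norm"]

def apply_rules(score):
    # Expert System stage: an explicit, auditable threshold.
    return "review" if score >= 0.5 else "routine"

def log_evidence(record, score, decision, repository):
    # Evidence stage: the documented record feeding a searchable repository.
    repository.append({"input": record, "score": score, "decision": decision})

repository = []
record = {"age_norm": 0.8, "lab_norm": 0.2}
score = predict(record)            # 0.7*0.8 + 0.3*0.2 = 0.62
decision = apply_rules(score)
log_evidence(record, score, decision, repository)
print(decision)  # "review"
```

The point of the sketch is the data flow, not the arithmetic: each stage consumes the previous stage's output and leaves a record behind.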
Machine Learning extracts patterns, interactions and signals from structured datasets—typically spreadsheet-like tables, not images, audio or video.
amsafis specialises precisely in this domain: numeric or well-coded variables, possibly sparse, noisy or heterogeneous.
ML requires a dataset from which to learn; in real practice, these datasets are often sparse, noisy or heterogeneous.
At amsafis, ML does not involve image, audio or video processing; these domains belong to deep-learning specialisations outside the scope of the consultancy.
Many ML workflows include classical statistical testing: hypothesis tests, p-values, confidence intervals. These are useful for scientific validation, but not strictly required for integrating new knowledge into the other two pillars, Expert System rules and the RAG evidence base.
For these two pillars, descriptive statistics often suffice: counts, rates, group averages and distributions.
If an organisation already has BI tools (Power BI, Tableau, SQL analytics, ERP reports), these descriptive results can be used immediately to feed Expert System rules and the RAG evidence base, without requiring a full inferential modelling project.
This avoids unnecessary delays and makes AI adoption practical even with modest internal analytics resources.
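As an illustration of how far descriptive statistics alone can go, the following sketch (the readings and the two-standard-deviation control limit are invented assumptions) turns a BI-style summary directly into an explicit rule:

```python
import statistics

# Illustrative readings, e.g. exported from an existing BI report.
readings = [4.1, 4.4, 3.9, 5.8, 4.2, 4.0, 6.1, 4.3]

mean = statistics.mean(readings)
stdev = statistics.stdev(readings)
upper_limit = mean + 2 * stdev  # a simple descriptive control limit

def rule_flag(value, limit=upper_limit):
    # An explicit Expert System rule fed by descriptive statistics,
    # with no fitted inferential model behind it.
    return "flag" if value > limit else "ok"

print(rule_flag(7.5))  # "flag"
print(rule_flag(4.0))  # "ok"
```

No hypothesis test or model fit is involved: the rule's threshold comes straight from summaries the organisation already produces.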
They can be computed not only from full datasets, but sometimes from summary statistics (SSD): means, variances, counts and correlations.
This enables privacy-preserving collaboration: it allows modelling without sharing raw datasets.
But SSD can only encode linear relationships, because everything is summarised. SSD cannot retain non-linear effects, interactions or subgroup-specific patterns. To detect these phenomena you must use the full dataset.
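The linear-only nature of SSD is easy to demonstrate: an ordinary least-squares line can be recovered from a handful of summary numbers alone, while nothing non-linear survives the summarisation. The figures below are invented for illustration:

```python
# Fitting a linear model purely from summary statistics (SSD):
# no raw rows are needed. All values are illustrative.
n = 200              # sample size (needed for standard errors, not for the point estimates)
mean_x, mean_y = 50.0, 120.0
var_x = 16.0         # variance of x
cov_xy = 12.0        # covariance between x and y

slope = cov_xy / var_x               # OLS slope from summaries alone
intercept = mean_y - slope * mean_x  # OLS intercept from summaries alone

print(slope, intercept)  # 0.75 82.5
```

These few numbers fully determine the fitted line; any threshold effect, interaction or subgroup pattern in the underlying rows is invisible to them, which is exactly why modern ML needs the full dataset.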
This is where modern ML excels: it captures the non-linear effects and interactions that SSD, by construction, cannot express.
ML results are inputs to other components:
Rules can incorporate ML-derived thresholds, clusters or risk categories.
ML results can be documented and fed to the evidence repository.
If needed, ML outputs can be used in mathematical optimisation (e.g., linear programming). An Expert System can also call optimisation logic internally.
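A hypothetical sketch of that hand-off, with invented scores and a tiny exhaustive search standing in for a real LP solver:

```python
from itertools import product

# ML-derived benefit scores per unit of resource, per site (invented values).
scores = {"site_a": 3.0, "site_b": 5.0, "site_c": 2.0}
budget = 4  # total units available

def optimise(scores, budget):
    # Exhaustive search over integer allocations, standing in for a real
    # LP/ILP solver: maximise total score subject to the budget constraint.
    best_alloc, best_value = None, -1.0
    sites = list(scores)
    for alloc in product(range(budget + 1), repeat=len(sites)):
        if sum(alloc) <= budget:
            value = sum(a * scores[s] for a, s in zip(alloc, sites))
            if value > best_value:
                best_alloc, best_value = dict(zip(sites, alloc)), value
    return best_alloc, best_value

alloc, value = optimise(scores, budget)
print(alloc, value)  # all 4 units go to site_b, total value 20.0
```

An Expert System could call `optimise` internally and then apply explicit rules to the resulting allocation, keeping the decision step auditable.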
Generative AI is the most recent AI paradigm, widely known since the release of ChatGPT. Unlike ML and Expert Systems, it uses language modelling rather than structured data or explicit rules.
But LLMs are not inherently reliable. They require grounding.
This is why amsafis uses Retrieval-Augmented Generation (RAG) as the mandatory architecture: every answer is grounded in documents retrieved from a curated knowledge base.
On-premise deployment is required when documents cannot leave the premises (healthcare, regulated industry, R&D). This requires hosting both the language model and the retrieval index locally.
The model does not need broad world knowledge: if a company is not in aerospace, it never needs to know anything about the solar system. Its value lies in RAG, not in encyclopedic knowledge.
A good RAG system depends on two delicate components: how documents are chunked, and how chunks and queries are embedded for retrieval. These must *match*: the same embedding logic must be applied at indexing time and at query time, and chunk boundaries must align with the questions users actually ask.
At amsafis, the knowledge base is reviewed manually before indexing to prevent contradictory, outdated or duplicated content from entering the index.
This is equivalent to *prompt engineering on the data side*.
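A toy sketch of the retrieval side, assuming a bag-of-words embedding in place of a trained model; the key point it illustrates is that the same `embed` function serves both indexing and querying:

```python
import math
from collections import Counter

def embed(text):
    # Toy embedding: word counts. A real system would use a trained
    # embedding model, but the same function MUST serve index and query.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

chunks = [
    "maintenance interval for pump unit is 90 days",
    "safety valve pressure limit is 8 bar",
]
index = [(c, embed(c)) for c in chunks]  # chunks reviewed, then indexed

query = "what is the pressure limit of the safety valve"
best = max(index, key=lambda item: cosine(embed(query), item[1]))
print(best[0])  # the safety-valve chunk is retrieved
```

If indexing and querying used different embedding logic, the similarity scores would be meaningless, which is the mismatch the manual review and design discipline guard against.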
Expert Systems are the first branch of AI formally established as such, and they remain irreplaceable in safety-critical environments.
They execute explicit logic, not statistical inference and not neural-language reasoning.
They can handle explicit thresholds, chained conditions and regulatory constraints in a fully deterministic way.
A clinician, engineer or operator may suspect a specific inference path. An Expert System can be programmed to produce a step-by-step trace of exactly which rules fired, and why. This provides an audit trail that RAG-LLMs cannot guarantee.
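A minimal forward-chaining sketch (rule names, conditions and thresholds are invented) showing how such an audit trail can be produced:

```python
# Minimal Expert System with an audit trail: every rule that fires is
# recorded in order, so the inference path can be inspected afterwards.
# Rule contents are purely illustrative.
RULES = [
    ("R1: high_temp",   lambda f: f["temp_c"] > 38.5,               {"fever": True}),
    ("R2: fever_check", lambda f: f.get("fever") and f["hr"] > 100, {"escalate": True}),
]

def infer(facts):
    trace = []
    changed = True
    while changed:                      # forward chaining to a fixed point
        changed = False
        for name, condition, conclusion in RULES:
            already = all(facts.get(k) == v for k, v in conclusion.items())
            if condition(facts) and not already:
                facts.update(conclusion)
                trace.append(name)      # audit trail: which rule fired, in order
                changed = True
    return facts, trace

facts, trace = infer({"temp_c": 39.2, "hr": 110})
print(trace)  # ['R1: high_temp', 'R2: fever_check']
```

The returned `trace` is exactly the audit trail the text describes: a deterministic, replayable record of the inference path.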
Expert Systems can evaluate the records stored in the evidence repository. These records are not rules themselves; they are evaluated by rules.
This closes the loop between the three AIs.
In real organisations, high-stakes problems require all three: patterns learned by ML, evidence grounded by RAG, and explicit rules enforced by Expert Systems.
This tripartite architecture is the core of amsafis.