THE THREE AIs — WIKI

Machine Learning · Generative AI (RAG-LLM) · Expert Systems

How they differ, how they interact, and why all three matter in high-stakes decision support.


1. Overview

Modern AI is not a single technology.

At amsafis, decision-support systems combine three distinct forms of Artificial Intelligence, each with a different logic, different strengths and different operational requirements:

  1. Machine Learning (ML) — learns predictive patterns from data.
  2. Generative AI with RAG (Retrieval-Augmented Generation) — interprets unstructured documents and provides natural-language reasoning grounded in evidence.
  3. Expert Systems (ES) — implement transparent, auditable rules for safety-critical or regulated logic.

Each of these is an AI in its own right. None replaces the others.

Together, they form a closed loop:

data → prediction → rules → evidence → decision


2. Machine Learning

Predictive Modelling from Structured Data

Machine Learning extracts patterns, interactions and signals from structured datasets—typically spreadsheet-like tables, not images, audio or video.

amsafis specialises precisely in this domain: numeric or well-coded variables, possibly sparse, noisy or heterogeneous.


2.1 What ML needs: structured datasets

ML requires a dataset from which to learn: in real practice, a structured table of the kind described above.

At amsafis, ML does not involve image, audio or video processing. (These domains belong to deep-learning specialisations outside the scope of the consultancy.)


2.2 When formal statistics matter, and when they don’t

Many ML workflows include classical statistical testing: hypothesis tests, p-values, confidence intervals.

These are useful for scientific validation, but not strictly required for integrating new knowledge into the other two pillars: the rule base of an Expert System and the evidence repository behind RAG.

For these two pillars, descriptive statistics often suffice: counts, means, rates and cross-tabulations.

If an organisation already has BI tools (Power BI, Tableau, SQL analytics, ERP reports), these descriptive results can be used immediately to feed rule thresholds and documented evidence, without requiring a full inferential modelling project.

This avoids unnecessary delays and makes AI adoption practical even with modest internal analytics resources.
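As a sketch of this descriptive route (dataset, site names and figures all invented for illustration), a BI-style summary can yield a flag directly usable downstream:

```python
# Descriptive statistics only: group means and a simple flag, no inference.
# The dataset, site names and figures are invented for illustration.
data = [
    {"site": "A", "defects": 3}, {"site": "A", "defects": 5},
    {"site": "B", "defects": 12}, {"site": "B", "defects": 9},
]

def mean_by(records, group_key, value_key):
    """BI-style aggregation: mean of value_key per group_key."""
    groups = {}
    for r in records:
        groups.setdefault(r[group_key], []).append(r[value_key])
    return {g: sum(vals) / len(vals) for g, vals in groups.items()}

defect_means = mean_by(data, "site", "defects")           # {"A": 4.0, "B": 10.5}
overall = sum(defect_means.values()) / len(defect_means)  # 7.25
flagged = [s for s, m in defect_means.items() if m > overall]
```

Such a flag can feed a rule or be written up as documented evidence, with no inferential statistics involved.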


2.3 Linear vs non-linear: why both matter

Linear models

They can be fitted not only from full datasets, but sometimes from summary statistics (SSD) alone: means, variances and cross-product matrices computed once from the raw data.

This enables collaboration between organisations: modelling can proceed without sharing raw datasets.
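A minimal sketch of this idea, on synthetic data with arbitrary illustrative coefficients: ordinary least squares can be recovered from two cross-product summaries, without the raw rows ever being shared.

```python
# OLS from summary statistics: only X'X and X'y leave a site, never raw rows.
# Data are synthetic; the coefficients are arbitrary illustrative values.
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])  # intercept + 3 vars
true_beta = np.array([1.0, 2.0, -1.0, 0.5])
y = X @ true_beta + rng.normal(scale=0.1, size=n)

XtX = X.T @ X   # summary statistic: 4x4 cross-product matrix
Xty = X.T @ y   # summary statistic: length-4 vector
beta = np.linalg.solve(XtX, Xty)  # coefficients recovered from summaries alone
```

In a multi-site setting each partner would compute its own `XtX` and `Xty`, and only these small matrices would be pooled (they simply add across sites).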

But SSD can only encode linear relationships, because everything is summarised. SSD cannot retain interactions, thresholds or other non-linear structure.

Non-linear ML

To detect these phenomena you must use the full dataset.

This is where modern ML excels: capturing interactions, thresholds and other non-linear patterns directly from the raw data.

SSD cannot express non-linearity. Therefore, collaborations based only on summary statistics are limited to linear effects.
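A small synthetic demonstration of why summaries miss interactions: here the outcome depends only on the product of two variables, so every marginal correlation that an SSD exchange could carry is near zero, while a model with access to the raw rows fits it exactly.

```python
# A pure interaction: y = x1 * x2. Marginal correlations (all a summary table
# carries) are ~0, so an SSD-style linear fit explains nothing, while access
# to the raw rows lets us build the interaction feature and fit exactly.
import numpy as np

rng = np.random.default_rng(1)
x1, x2 = rng.normal(size=(2, 1000))
y = x1 * x2

X_lin = np.column_stack([np.ones(1000), x1, x2])  # supportable from SSD
X_int = np.column_stack([X_lin, x1 * x2])         # requires the full dataset

def r_squared(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1.0 - np.var(y - X @ beta) / np.var(y)

r2_linear = r_squared(X_lin, y)  # close to 0
r2_full = r_squared(X_int, y)    # close to 1
```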


2.4 After ML: where its results go

ML results are inputs to other components:

Rules can incorporate ML-derived thresholds, clusters or risk categories.

ML results can be documented and fed to the evidence repository.

If needed, ML outputs can be used in mathematical optimisation (e.g., linear programming). An Expert System can also call optimisation logic internally.
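A toy sketch of this hand-off (threshold value, field names and case identifier are all illustrative, not actual amsafis artefacts):

```python
# Hand-off sketch: an ML score crosses into the rule layer and the evidence
# repository. Threshold, field names and case id are illustrative only.
RISK_THRESHOLD = 0.72  # e.g. chosen on a validation set of the ML model

def rule_decision(risk_score: float) -> str:
    """Transparent rule consuming an ML-derived threshold."""
    return "escalate" if risk_score >= RISK_THRESHOLD else "routine"

def evidence_record(case_id: str, risk_score: float, decision: str) -> dict:
    """Documented result, ready to be filed in the evidence repository."""
    return {"case": case_id, "risk": risk_score, "decision": decision,
            "rule": f"risk >= {RISK_THRESHOLD}"}

decision = rule_decision(0.81)
record = evidence_record("C-104", 0.81, decision)
```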


3. Generative AI (RAG-LLM)

Retrieval-Augmented Reasoning over Documents

Generative AI is the most recent AI paradigm, widely known since the release of ChatGPT. Unlike ML and ES, it uses language modelling rather than structured data or explicit rules.

But LLMs are not inherently reliable. They require grounding.

This is why amsafis uses RAG as the mandatory architecture: every answer must be grounded in documents retrieved from the organisation's own knowledge base.
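As a toy illustration of the retrieval step (documents and question are invented; production systems use learned embeddings and a vector store rather than word counts):

```python
# Toy retrieval step: rank documents by bag-of-words cosine similarity and
# build a grounded prompt. Documents and question are invented; production
# systems use learned embeddings and a vector store instead of word counts.
import math
from collections import Counter

documents = [
    "Pump P-301 must be inspected every 500 operating hours.",
    "Safety valves are tested annually by the maintenance team.",
    "Invoices are archived for ten years in the ERP system.",
]

def vectorize(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) \
         * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 1) -> list:
    q = vectorize(query)
    return sorted(documents, key=lambda d: cosine(q, vectorize(d)),
                  reverse=True)[:k]

question = "pump P-301 inspection interval"
context = retrieve(question)[0]
prompt = (f"Answer only from the context below.\n\n"
          f"Context: {context}\n\nQuestion: {question}")
```

The prompt sent to the LLM contains the retrieved evidence, so the answer can be checked against a named source.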


3.1 Two deployment modes

(A) API-based deployment

The model is accessed as an external service; suitable when documents and queries are allowed to leave the premises.

(B) Local deployment

Required when documents cannot leave the premises (healthcare, regulated industry, R&D).

This requires hosting and running the model on the organisation's own infrastructure.

The model does not need broad general knowledge: if a company is not in aerospace, it never needs to know anything about the solar system. Its value lies in RAG, not in encyclopedic knowledge.


3.2 The role of prompts vs. the role of the knowledge base

A good RAG system depends on two delicate components:

  1. User prompts — how the end-user formulates the question
  2. Written knowledge — how the organisation writes and structures documents

These must *match*.

At amsafis, the knowledge base is reviewed manually before indexing to prevent ambiguities, contradictions, duplicates and outdated content from degrading retrieval.

This is equivalent to *prompt engineering on the data side*.


4. Expert Systems

Transparent, auditable reasoning encoded as rules

Expert Systems are the first branch of AI formally established as such, and they remain irreplaceable in safety-critical environments.

They execute explicit logic, not statistical inference and not neural-language reasoning.


4.1 Why Expert Systems still matter

They can handle safety-critical and regulated logic that must remain fully transparent and auditable.

A clinician, engineer or operator may suspect a specific inference path. An Expert System can be programmed to produce a complete, step-by-step trace of every rule that fired and why.

This provides an audit trail that RAG-LLMs cannot guarantee.
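A minimal forward-chaining sketch showing such a trace (rule names, conditions and facts are invented for illustration; real rule bases are larger and domain-specific):

```python
# Minimal forward-chaining engine with an audit trail. Rule names, conditions
# and facts are invented; real rule bases are larger and domain-specific.
RULES = [
    ("R1: high temperature implies alarm",
     lambda f: f.get("temp_c", 0) > 80, {"alarm": True}),
    ("R2: alarm implies shutdown check",
     lambda f: f.get("alarm", False), {"check_shutdown": True}),
]

def infer(facts: dict):
    """Apply rules until nothing new fires; record every firing in order."""
    trace = []
    changed = True
    while changed:
        changed = False
        for name, condition, conclusion in RULES:
            already = all(facts.get(k) == v for k, v in conclusion.items())
            if condition(facts) and not already:
                facts.update(conclusion)
                trace.append(name)  # audit trail: which rule fired, and when
                changed = True
    return facts, trace

facts, trace = infer({"temp_c": 95})
```

For the input above the trace lists R1 then R2, and a reviewer can replay the inference rule by rule, which is exactly the guarantee a generative model cannot give.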


4.2 Working with data

Expert Systems can also operate on structured data records.

These records are not rules themselves; they are the facts that the rules evaluate.


4.3 Interaction with ML and RAG

Expert Systems can consume ML-derived thresholds, clusters and risk categories as rule parameters, and their conclusions can be documented in the evidence repository that grounds RAG. This closes the loop between the three AIs.


5. The Three AIs Together

Why a single approach is never enough

In real organisations, high-stakes problems require all three: predictive patterns from ML, document-grounded reasoning from RAG-LLMs, and transparent, auditable rules from Expert Systems.

This tripartite architecture is the core of amsafis.