
THE THREE AIs — WIKI

Machine Learning · Generative AI (RAG-LLM) · Expert Systems

How they differ, how they interact, and why all three matter in high-stakes decision support.


1. Overview

Modern AI is not a single technology.

At amsafis, decision-support systems combine three distinct forms of Artificial Intelligence, each with a different logic, different strengths and different operational requirements:

  1. Machine Learning (ML) — learns predictive patterns from data.
  2. Generative AI with RAG (Retrieval-Augmented Generation) — interprets unstructured documents and provides natural-language reasoning grounded in evidence.
  3. Expert Systems (ES) — implement transparent, auditable rules for safety-critical or regulated logic.

Each of these is an AI in its own right. None replaces the others.

Together, they form a closed loop:

data → prediction → rules → evidence → decision


2. Machine Learning

Predictive Modelling from Structured Data

Machine Learning extracts patterns, interactions and signals from structured datasets—typically spreadsheet-like tables, not images, audio or video.

amsafis specialises precisely in this domain: numeric or well-coded variables, possibly sparse, noisy or heterogeneous.


2.1 What ML needs: structured datasets

ML requires a dataset from which to learn. In real practice:

  • rows = cases (patients, assets, events, transactions)
  • columns = variables (numeric, ordinal, coded categorical)
  • optional time dimension = functional or longitudinal data

At amsafis, ML does not involve:

  • image learning
  • audio learning
  • computer vision
  • raw text embedding without structure

(These domains belong to deep-learning specialisations outside the scope of the consultancy.)
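
For concreteness, a minimal sketch of such a structured dataset in pandas (all column names and values are hypothetical):

  import pandas as pd

  # rows = cases, columns = coded variables; names are illustrative only
  df = pd.DataFrame({
      "case_id":   [1, 2, 3, 4],
      "age":       [54, 61, 47, 70],       # numeric
      "severity":  [2, 3, 1, 3],           # ordinal (1 = mild ... 3 = severe)
      "unit_code": ["A", "B", "A", "C"],   # coded categorical
      "outcome":   [0, 1, 0, 1],           # target variable
  })

  # an optional time dimension would add one row per case per time point
  print(df.dtypes)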


2.2 When formal statistics matter, and when they don’t

Many ML workflows include classical statistical testing:

  • chi-square
  • t-tests, F-tests
  • Fisher exact test
  • likelihood ratio tests

These are useful for scientific validation, but not strictly required for integrating new knowledge into:

  • Generative AI systems (RAG-LLM)
  • Expert Systems

For these two pillars, descriptive statistics often suffice:

  • frequencies and cross-tabs
  • means, SD, quantiles
  • simple associations

If an organisation already has BI tools (Power BI, Tableau, SQL analytics, ERP reports), these descriptive results can be used immediately to feed:

  • a rule-based model, or
  • a RAG repository

without requiring a full inferential modelling project.

This avoids unnecessary delays and makes AI adoption practical even with modest internal analytics resources.
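
As an illustration, this level of descriptive output takes only a few lines in pandas (variable names and values hypothetical):

  import pandas as pd

  df = pd.DataFrame({
      "unit_code": ["A", "B", "A", "C", "B", "A"],
      "outcome":   [0, 1, 0, 1, 1, 0],
      "age":       [54, 61, 47, 70, 58, 66],
  })

  # frequencies and cross-tabs
  print(df["unit_code"].value_counts())
  print(pd.crosstab(df["unit_code"], df["outcome"]))

  # means, SD, quantiles
  print(df["age"].describe())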


2.3 Linear vs non-linear: why both matter

Linear models

Linear models can be fitted not only from full datasets, but sometimes from summary statistics data (SSD) alone:

  • number of cases
  • means and SD
  • correlations
  • covariance matrix

This enables:

  • linear regression
  • mediation
  • SEM (structural equation models)
  • PLS (partial least squares)

It allows modelling without sharing raw datasets.
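
A minimal sketch of the idea: ordinary least-squares slopes can be recovered from the covariance matrix and the means alone, with no raw data (all numbers invented for illustration):

  import numpy as np

  # summary statistics (SSD) for two predictors x1, x2 and a response y
  means = np.array([5.0, 2.0, 10.0])        # [x1, x2, y]
  cov = np.array([[4.0, 0.8, 3.2],          # full covariance matrix
                  [0.8, 1.0, 0.9],
                  [3.2, 0.9, 6.0]])

  cov_xx = cov[:2, :2]                      # predictor block
  cov_xy = cov[:2, 2]                       # predictor-response block

  beta = np.linalg.solve(cov_xx, cov_xy)    # OLS slopes
  intercept = means[2] - beta @ means[:2]   # intercept from the means
  print(beta, intercept)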

But SSD can only encode linear relationships—everything is summarised. SSD cannot retain:

  • non-linear interactions
  • heterogeneity
  • thresholds
  • local effects
  • latent clusters

Non-linear ML

To detect these phenomena you must use the full dataset.

This is where modern ML excels:

  • additive models
  • tree-based models
  • kernel methods
  • functional data analysis
  • hybrid mechanistic-statistical models

SSD cannot express non-linearity. Therefore:

  • linear ≠ enough
  • linear ≠ realistic for biological, operational or industrial data
  • linear ≠ future-proof
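
A small illustration of the gap, assuming scikit-learn is available: synthetic data with a threshold effect that a linear fit cannot represent but a tree ensemble captures:

  import numpy as np
  from sklearn.linear_model import LinearRegression
  from sklearn.ensemble import RandomForestRegressor

  rng = np.random.default_rng(0)
  X = rng.uniform(0, 10, size=(500, 1))
  # threshold effect: the response jumps once x exceeds 6
  y = np.where(X[:, 0] > 6, 5.0, 0.0) + rng.normal(0, 0.3, 500)

  linear = LinearRegression().fit(X, y)
  forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

  print("linear R^2:", linear.score(X, y))   # misses the jump
  print("forest R^2:", forest.score(X, y))   # captures the threshold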

2.4 After ML: where its results go

ML results are inputs to other components:

  • Expert Systems: rules can incorporate ML-derived thresholds, clusters or risk categories.

  • RAG-LLM: ML results can be documented and fed into the evidence repository.

  • Optimisation (not AI): if needed, ML outputs can be used in mathematical optimisation (e.g., linear programming), as sketched below. An Expert System can also call optimisation logic internally.
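
As a hedged sketch of that optimisation hand-off: hypothetical ML-predicted risk scores enter a small linear program via scipy (all numbers invented for illustration):

  from scipy.optimize import linprog

  # hypothetical ML-predicted failure risks for three assets
  risk = [0.30, 0.12, 0.45]
  cost = [100, 60, 140]                     # maintenance cost per asset

  # choose maintenance effort x_i in [0, 1] to maximise risk reduction
  # subject to a total budget of 150
  res = linprog(c=[-r for r in risk],       # minimise the negative = maximise
                A_ub=[cost], b_ub=[150],
                bounds=[(0, 1)] * 3)
  print(res.x)                              # effort per asset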


3. Generative AI (RAG-LLM)

Retrieval-Augmented Reasoning over Documents

Generative AI is the most recent AI paradigm, widely known since the release of ChatGPT. Unlike ML and ES, it uses language modelling rather than structured data or explicit rules.

But LLMs are not inherently reliable. They require grounding.

This is why amsafis uses RAG as the mandatory architecture:

  • retrieve documents or fragments
  • supply them to the model
  • generate answers explicitly tied to evidence
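
A minimal sketch of this loop in plain Python. The retriever is a toy keyword scorer and llm_generate is a placeholder for whatever model endpoint is actually deployed; both are illustrative assumptions, not a production design:

  # toy knowledge base: fragments the answer must be grounded in
  FRAGMENTS = [
      "Valve V-101 must be inspected every 6 months.",
      "Pressure above 8 bar requires immediate shutdown.",
  ]

  def retrieve(question, k=1):
      """Toy retriever: rank fragments by words shared with the question."""
      words = set(question.lower().split())
      scored = sorted(FRAGMENTS,
                      key=lambda f: len(words & set(f.lower().split())),
                      reverse=True)
      return scored[:k]

  def llm_generate(prompt):
      # placeholder: substitute the deployed model's completion call here
      return "[model answer grounded in the supplied evidence]"

  def answer(question):
      evidence = retrieve(question)
      prompt = ("Answer ONLY from the evidence below.\n"
                "Evidence: " + " ".join(evidence) + "\n"
                "Question: " + question)
      return llm_generate(prompt), evidence   # answer tied to its sources

  print(answer("When must valve V-101 be inspected?"))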

3.1 Two deployment modes

(A) API-based deployment

  • Uses cloud LLMs (OpenAI, Anthropic, etc.)
  • Low cost
  • Minimal infrastructure
  • Adequate for public-information chatbots, manuals, product information, etc.

(B) Local deployment

Required when documents cannot leave the premises (healthcare, regulated industry, R&D).

This requires:

  • local hardware (GPU)
  • local LLM model (Mistral-7B, Llama-3 8B…)
  • local retrieval index
  • local knowledge base

The model:

  • does not need universal knowledge
  • does not contact external servers
  • can run offline
  • only needs enough capacity to interpret the company’s own documents

If a company is not in aerospace, the model never needs to know anything about the solar system. Its value lies in RAG, not in encyclopedic knowledge.
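
One possible shape of such a local pipeline, sketched with the llama-cpp-python bindings; the model path, context size and prompt format are assumptions, not a tested configuration:

  from llama_cpp import Llama   # runs GGUF models fully offline

  # hypothetical local model file; never contacts external servers
  llm = Llama(model_path="/models/mistral-7b-instruct.gguf", n_ctx=4096)

  def local_answer(question, evidence):
      prompt = ("Answer only from this evidence: " + evidence +
                "\nQuestion: " + question + "\nAnswer: ")
      out = llm(prompt, max_tokens=256)
      return out["choices"][0]["text"]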


3.2 The role of prompts vs. the role of the knowledge base

A good RAG system depends on two delicate components:

  1. User prompts — how the end-user formulates the question
  2. Written knowledge — how the organisation writes and structures documents

These must *match*.

At amsafis, the knowledge base is reviewed manually before indexing to prevent:

  • ambiguous definitions
  • overlapping rules
  • unintended interpretations
  • lexical traps that produce hallucinations

This is equivalent to *prompt engineering on the data side*.


4. Expert Systems

Transparent, auditable reasoning encoded as rules

Expert Systems were the first branch of AI to be formally established as such, and they remain irreplaceable in safety-critical environments.

They execute explicit logic, not statistical inference and not neural-language reasoning.


4.1 Why Expert Systems still matter

They can handle:

  • hundreds of rules
  • multi-step logic
  • exceptions
  • safety conditions
  • protocol enforcement
  • reasoning that would overwhelm humans

A clinician, engineer or operator may suspect a specific inference path. An Expert System can be programmed to produce:

  • confirmation
  • counterexamples
  • traceable explanations

This provides an audit trail that RAG-LLMs cannot guarantee.
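
A minimal sketch of rules that keep their own audit trail (rule names, conditions and thresholds invented for illustration):

  # each rule: (name, condition, conclusion)
  RULES = [
      ("R1", lambda c: c["pressure"] > 8.0, "shutdown_required"),
      ("R2", lambda c: c["temp"] > 90 and c["pressure"] > 6.0, "raise_alert"),
  ]

  def evaluate(case):
      """Apply every rule; return conclusions plus the rules that fired."""
      trace = [(name, concl) for name, cond, concl in RULES if cond(case)]
      return {"conclusions": [c for _, c in trace],
              "audit_trail": [n for n, _ in trace]}

  print(evaluate({"pressure": 9.1, "temp": 95}))
  # {'conclusions': ['shutdown_required', 'raise_alert'],
  #  'audit_trail': ['R1', 'R2']}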


4.2 Working with data

Expert Systems can:

  • read large datasets (thousands of rows)
  • apply rules to each case
  • generate flags, alerts, categories

These records are not rules themselves. They are evaluated by rules.
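
Applied at scale, the same kind of rules become vectorised flags over the dataset (column names and thresholds hypothetical):

  import pandas as pd

  df = pd.DataFrame({"pressure": [7.2, 9.1, 5.5],
                     "temp":     [85, 95, 60]})

  # rules evaluate the records; the records are not rules themselves
  df["shutdown_required"] = df["pressure"] > 8.0
  df["raise_alert"] = (df["temp"] > 90) & (df["pressure"] > 6.0)
  print(df)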


4.3 Interaction with ML and RAG

  • ML produces quantitative patterns → ES can incorporate them as rule thresholds or decision nodes.
  • RAG produces narrative knowledge → ES can use it as domain descriptions or contextual constraints.
  • ES produces systematised logic → RAG can cite it; ML can be used to test it.

This closes the loop between the three AIs.


5. The Three AIs Together

Why a single approach is never enough

  • ML provides prediction.
  • ES provides explanation.
  • RAG-LLM provides understanding of documents.

In real organisations, high-stakes problems require all three:

  • ML finds the signal
  • ES encodes the decision logic
  • RAG explains, documents and supports human interpretation

This tripartite architecture is the core of amsafis.
