Add mermaid diagrams for infrastructure and roadmap visualizations

salah 2026-01-28 11:23:07 +01:00
parent 863224fda3
commit 15b11cb180
2 changed files with 97 additions and 0 deletions


@@ -100,3 +100,55 @@
* Predictable performance
* Low-latency data access
* Independence from cloud providers
```mermaid
flowchart TB
subgraph NET[Internet]
W[Web/Clients]
end
subgraph EDGE[Edge]
DNS["DNS: betelgeusebytes.io<br/>A/AAAA -> Ingress IP"]
LE["cert-manager<br/>Let's Encrypt"]
ING[Ingress-NGINX]
DNS --> ING
LE --> ING
W --> DNS
end
subgraph K8S
direction TB
subgraph N1[Node 1]
CP[control-plane + worker]
PV1[(local SSD PVs)]
end
subgraph N2[Node 2]
WK[worker + storage-heavy]
PV2[(local SSD PVs)]
end
subgraph NS
AI[ai: LLM, TEI, Label Studio]
VEC[vec: Qdrant]
GRAPH[graph: Neo4j]
DB[db: Postgres, Redis]
BROKER[broker: Kafka]
STORE[storage: MinIO]
MLOPS[ml/mlops: MLflow, Argo WF, Jupyter]
OBS[observability: Grafana/Prom/Loki/Tempo/Alloy]
DEV[devops: ArgoCD, Gitea]
HAD
end
CP --- WK
PV1 --- DB
PV2 --- STORE
PV2 --- OBS
PV2 --- GRAPH
PV2 --- VEC
end
ING -->|host routing| NS
ING -.->|"TCP (optional)"| DB
ING -.->|"Bolt (optional)"| GRAPH
```


@@ -24,3 +24,48 @@
* GPU nodes
* vLLM integration
* Multiproject tenancy
```mermaid
sequenceDiagram
autonumber
participant User as User/Client
participant Admin as Admin UI (optional)
participant KC as Keycloak (OIDC)
participant OR as Orchestrator (FastAPI)
participant TEI as TEI Embeddings
participant QD as Qdrant
participant N4 as Neo4j
participant LLM as LLM (Ollama/llama.cpp)
participant PG as Postgres
participant S3 as MinIO (S3)
User->>Admin: Open app / submit query (optional)
Admin->>KC: OIDC login
KC-->>Admin: ID/Access token
Admin->>OR: Request + Bearer token
OR->>KC: Verify token / JWKS
KC-->>OR: OK
OR->>PG: Load user/session/config
PG-->>OR: context
OR->>TEI: Embed query/text
TEI-->>OR: vector
OR->>QD: Vector search (hadith/bios)
QD-->>OR: top-k passages/IDs
OR->>N4: Graph query (narrator chain / relations)
N4-->>OR: chain/edges
OR->>LLM: Prompt + retrieved context + schema
LLM-->>OR: JSON extraction / reasoning result
OR->>S3: (optional) store artifacts/provenance
S3-->>OR: stored
OR-->>Admin: Response (JSON + citations/provenance)
Admin-->>User: Render result + review tools
```
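
The retrieval portion of the sequence above (embed query, vector search, graph lookup, then LLM call) can be sketched in plain Python with stubbed services. All function names, IDs, and return shapes below are illustrative assumptions standing in for TEI, Qdrant, Neo4j, and the LLM; they are not the orchestrator's actual API.

```python
def tei_embed(text: str) -> list[float]:
    # Stub for TEI: a deterministic toy "embedding", not a real model.
    return [float(len(text) % 5), float(text.count("a")), float(text.count("i"))]

def qdrant_search(vector: list[float], top_k: int = 2) -> list[dict]:
    # Stub for Qdrant: pretend top-k vector search over hadith/bio passages.
    return [{"id": "hadith:17", "score": 0.91}, {"id": "bio:4", "score": 0.85}][:top_k]

def neo4j_chain(passage_ids: list[str]) -> list[str]:
    # Stub for Neo4j: narrator chain (isnad) for the retrieved passages.
    return ["narrator-A", "narrator-B", "narrator-C"]

def llm_extract(prompt: str) -> dict:
    # Stub for the LLM (Ollama/llama.cpp): return a structured result.
    return {"answer": "...", "prompt_chars": len(prompt)}

def handle_query(query: str) -> dict:
    """Mirror the diagram's order: embed -> vector search -> graph -> LLM."""
    vec = tei_embed(query)
    hits = qdrant_search(vec)
    ids = [h["id"] for h in hits]
    chain = neo4j_chain(ids)
    prompt = f"query={query} passages={ids} chain={chain}"
    result = llm_extract(prompt)
    # Response carries citations/provenance, as in the diagram's final steps.
    return {"result": result, "citations": ids, "chain": chain}
```

The point of the sketch is the ordering and the provenance-carrying response shape, not the service calls themselves; the real orchestrator would swap each stub for an HTTP or driver client behind the same interface.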