Architecture Overview
Opentrace is a full-stack application consisting of four main services, each deployable as a Docker container.
System Architecture
┌─────────────────────────────────────────────────────────────┐
│ Client (Next.js) │
│ docs.opentrace.online │
│ │
│ Landing Page ─ Projects ─ Knowledge Base ─ Chat │
│ │ │
│ Clerk Auth │
└─────────────────────┬───────────────────────────────────────┘
│ HTTP/REST
┌─────────────────────▼───────────────────────────────────────┐
│ Server (FastAPI/Python) │
│ │
│ Routes (REST API) ─ Services ─ Models │
│ │ │
│ ┌─────▼──────┐ ┌───────────────┐ │
│ │ RAG │ │ Agents │ │
│ │ Ingestion │ │ (LangGraph) │ │
│ │ Retrieval │ │ │ │
│ └─────┬──────┘ └───────┬───────┘ │
│ │ │ │
└─────────┼───────────────────┼───────────────────────────────┘
│ │
┌───────▼───────────────────▼───────┐ ┌──────────────┐
│ Supabase (PostgreSQL) │ │ AWS S3 │
│ + pgvector extension │ │ (file │
│ │ │ storage) │
│ users │ projects │ documents │ └──────────────┘
│ document_chunks │ chats │ msgs │
└───────────────────────────────────┘
┌───────────────┐ ┌──────────────┐
│ Redis │ │ Celery │
│ (broker) │◄──►│ Worker │
│ │ │ (async tasks)│
└───────────────┘ └──────────────┘
Service Breakdown
| Service | Technology | Port | Purpose |
|---|---|---|---|
| Client | Next.js 16 + TypeScript | 3000 | Frontend UI — landing page, dashboard, chat interface |
| Server | FastAPI + Python 3.12 | 8000 | REST API — business logic, RAG orchestration, auth verification |
| Celery Worker | Celery + Python | — | Background task processing — document ingestion pipeline |
| Redis | Redis 7 Alpine | 6379 | Message broker for Celery task queue |
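The Celery worker's main job is the document ingestion pipeline: splitting uploaded files into chunks before embedding and storage. A minimal sketch of fixed-size chunking with overlap (the chunk size and overlap values below are illustrative, not Opentrace's actual settings):

```python
def chunk_text(text: str, chunk_size: int = 800, overlap: int = 100) -> list[str]:
    """Split text into fixed-size chunks with overlap, so content that
    straddles a boundary appears in two adjacent chunks."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks
```

Each chunk would then be embedded (via text-embedding-3-large) and written to the document_chunks table.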
External Services
| Service | Purpose |
|---|---|
| Supabase | PostgreSQL database with pgvector for vector storage, full-text search, and RPC functions |
| AWS S3 | File storage for uploaded documents (presigned URL upload) |
| OpenAI | GPT-4o for chat, GPT-4o-mini for guardrails, text-embedding-3-large for vectors |
| Clerk | Authentication and user management (JWT tokens, webhooks) |
| Tavily | Web search API (optional, for agentic RAG) |
| ScrapingBee | Web page crawling for URL ingestion |
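Retrieval compares the query embedding against stored chunk embeddings inside pgvector. Conceptually this is a cosine-similarity ranking (pgvector's `<=>` operator computes cosine *distance*, i.e. 1 minus similarity); a pure-Python illustration of what the database does:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k(query: list[float], chunks: dict[str, list[float]], k: int = 3) -> list[str]:
    """Rank chunk ids by similarity to the query embedding."""
    ranked = sorted(chunks, key=lambda cid: cosine_similarity(query, chunks[cid]),
                    reverse=True)
    return ranked[:k]
```

In production this ranking happens inside Postgres (via an RPC function and a vector index), not in Python; the sketch only shows the math.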
Data Flow
- Upload: Client → presigned URL from Server → direct upload to S3 → Client confirms → Celery task → ingestion pipeline → Supabase
- Chat: Client → Server → LangGraph agent → RAG retrieval (Supabase pgvector) → LLM (OpenAI) → streamed response
- Auth: Client → Clerk sign-in → JWT attached to requests → Server verifies token → request processed
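In the auth flow, the server receives a Clerk-issued JWT with each request. The claims segment is just base64url-encoded JSON, so the decoding step can be sketched as below; note that real verification must also validate the RS256 signature against Clerk's JWKS endpoint, which this sketch deliberately omits:

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode the claims segment of a JWT. Does NOT verify the signature;
    a real server must validate it against Clerk's public keys first."""
    payload_b64 = token.split(".")[1]
    # base64url segments are unpadded; restore padding before decoding
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))
```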
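The chat response is streamed back to the client delta by delta. One common server-side pattern, sketched below, is a generator that wraps each delta in Server-Sent Events framing (the kind of iterator FastAPI's StreamingResponse consumes); the `[DONE]` sentinel follows OpenAI's convention, and Opentrace's actual wire format may differ:

```python
from typing import Iterator

def sse_stream(deltas: Iterator[str]) -> Iterator[str]:
    """Wrap response deltas in SSE framing: one 'data:' event per delta,
    followed by a terminating sentinel event."""
    for delta in deltas:
        yield f"data: {delta}\n\n"
    yield "data: [DONE]\n\n"
```

The client reads the event stream and concatenates the deltas into the full answer as they arrive.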