Schema-first RAG memory platform

RAG Plug — schema-first RAG memory layer for teams

A Retrieval-Augmented Generation platform that turns structured documents into controllable, schema-validated memories, with isolated knowledge spaces and a compact API for semantic search.

FastAPI backend + LightRAG + Qdrant, with MongoDB for metadata, GitHub sign-in, and API key lifecycle management.

What RAG Plug is

RAG Plug is a schema-first RAG platform. Teams define a JSON schema for each memory, choose which text fields are indexed into vector space, and interact with documents through a compact HTTP API. Every memory is owned by a user and isolated at the index level.

  • FastAPI backend + MongoDB + LightRAG + Qdrant.
  • Next.js console that renders PieUI/Piedemo cards served by the backend.
  • GitHub login, JWT-style auth cookie for the console, and API keys for runtime access.

Typical flow

  1. Sign in with GitHub and create a new memory.
  2. Define the JSON schema and select text fields to index.
  3. Choose an embedding provider (OpenAI) per indexed field.
  4. Generate an API key.
  5. Ingest documents into the named memory.
  6. Run semantic search against a specific indexed field with different LightRAG modes.
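Steps 2–3 above amount to defining a memory with a schema and a per-field embedding choice. A minimal sketch of what that definition could look like — the payload shape, key names, and `indexed_fields` structure are assumptions for illustration, not the documented API contract:

```python
import json

# Hypothetical memory definition covering steps 2-3 of the flow above.
# Key names and payload shape are assumptions -- consult the RAG Plug
# console for the real contract.
memory_definition = {
    "name": "support-tickets",
    # JSON schema every ingested document must satisfy
    "schema": {
        "type": "object",
        "properties": {
            "title": {"type": "string"},
            "content": {"type": "string"},
            "priority": {"type": "integer"},
        },
        "required": ["title", "content"],
    },
    # Which text fields get embedded, and with which OpenAI model
    "indexed_fields": [
        {"field_name": "content", "embedding_model": "text-embedding-3-small"},
    ],
}

print(json.dumps(memory_definition, indent=2))
```

Non-indexed fields (like `priority` here) stay queryable metadata; only the selected text fields are embedded into vector space.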

Applications

Core use cases

Where RAG Plug is the right memory and retrieval layer.


Internal knowledge bases

Typed records enforced by JSON schema, strict validation on ingest, and semantic search over the right text fields.
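"Strict validation on ingest" can be pictured as a check of each incoming document against the memory's JSON schema. A minimal stdlib sketch of the idea, covering only `required` and basic `type` keywords — a real validator would implement full JSON Schema:

```python
def validate_document(doc: dict, schema: dict) -> list[str]:
    """Return a list of validation errors; empty means the doc passes.

    Illustration of schema-first ingest only: checks required keys and
    basic types, not the full JSON Schema vocabulary.
    """
    type_map = {"string": str, "integer": int, "number": (int, float), "boolean": bool}
    errors = []
    for key in schema.get("required", []):
        if key not in doc:
            errors.append(f"missing required field: {key}")
    for key, spec in schema.get("properties", {}).items():
        if key in doc:
            expected = type_map.get(spec.get("type"))
            if expected and not isinstance(doc[key], expected):
                errors.append(f"field {key!r} is not a {spec['type']}")
    return errors

schema = {
    "required": ["title", "content"],
    "properties": {"title": {"type": "string"}, "content": {"type": "string"}},
}
print(validate_document({"title": "Paris", "content": "Capital of France"}, schema))  # []
print(validate_document({"title": 42}, schema))
```

Documents that fail validation are rejected at ingest time, so the index only ever contains records matching the declared shape.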


Support & operations

Ticket, incident, and runbook stores with fast search over curated text fields instead of a noisy full-dump index.


Product & documentation search

Product artifacts and documentation with controlled structure, ready for agents and applications to consume.


Memory for agents

Per-user / per-memory isolation for agentic systems and automation jobs, without cross-memory contamination.


Multi-tenant RAG

Clean separation of knowledge spaces by user and project on top of a shared infrastructure stack.


Memory infrastructure for products

A lightweight way to add schema, indexes, and RAG search to existing products without building your own retrieval stack.

Technical Details

Architecture

Backend. FastAPI service with modules for users, memories, and API keys, plus a LightRAG service and Qdrant-backed retrieval. MongoDB stores users, API keys, memory definitions, schemas, and index metadata.

Frontend. Next.js 16 + React 19 shell that renders PieUI/Piedemo cards from the backend and provides navigation for Dashboard, Install, API Keys, Memories, and Pipelines.

Inference layer. LightRAG orchestrates the ingest and search workflows; Qdrant stores the embeddings. Supported embedding models: OpenAI text-embedding-3-small, text-embedding-3-large, and text-embedding-ada-002.

Minimal API surface

  • GET /version
  • POST /memory/<memory_name>
  • GET /memory/<memory_name>/schema
  • DELETE /memory/<memory_name>/<item_id>
  • POST /search/<memory_name>

Authentication via X-API-Key, Authorization: Bearer, or an api_key query parameter.

Search example

POST /search/my-memory
{
  "query": "capital of France",
  "field_name": "content",
  "top_k": 5,
  "mode": "naive",
  "enable_rerank": false
}
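From a client, the same search can be issued with nothing beyond the standard library. A sketch that builds (but does not send) the request — the base URL and key are placeholders, while `X-API-Key` is one of the documented auth options:

```python
import json
import urllib.request

BASE_URL = "https://ragplug.example.com"  # placeholder host
body = {
    "query": "capital of France",
    "field_name": "content",   # which indexed field to search
    "top_k": 5,
    "mode": "naive",           # other LightRAG modes are also accepted
    "enable_rerank": False,
}
req = urllib.request.Request(
    f"{BASE_URL}/search/my-memory",
    data=json.dumps(body).encode(),
    headers={"X-API-Key": "YOUR_API_KEY", "Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would execute the search.
print(req.method, req.full_url)
```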

Get Started

Interested in piloting RAG Plug?

Tell us what data you have and which products or agents you are building — we will help design the memory schemas, indexes, and integration flow.

info@swarm.ing
Global Distributed Operations