TeaRAGs vs Alternatives

A detailed comparison of TeaRAGs with other codebase semantic search solutions. This table compares only implemented functionality — no roadmaps or promises. Every cell links to evidence.

Legend:

  • ✅ — supported and confirmed by code/architecture
  • ⚠️ — partial / optional / not core
  • ❌ — not supported
  • 🧠 — supported through architecture (not a single feature)
  • 🚫 — architecturally absent

At a Glance

| | TeaRAGs | claude-context | serena | rag-code-mcp | DocRAG | grepai |
|---|---|---|---|---|---|---|
| Purpose | 🧠 Semantic search for code generation and analysis | 🔍 Semantic code search | 🛠 Symbol-level tools via LSP | 🔍 Local code RAG | 📄 Documentation RAG | 🔍 Semantic code search + call graphs |
| MCP-native | ✅ Go MCP SDK | | | ✅ mcp-go | | |

Infrastructure

| Criterion | TeaRAGs | claude-context | serena | rag-code-mcp | DocRAG | grepai |
|---|---|---|---|---|---|---|
| Local execution | ✅ Ollama + Qdrant | ⚠️ cloud-first default, local possible | ✅ local LSP | ✅ Ollama + Qdrant | ✅ sentence-transformers + LanceDB | ✅ 100% local |
| Cloud dependency | ❌ cloud optional | ⚠️ default, not required | | | ⚠️ optional for smart scraping | ❌ with Ollama |
| Embedding model | ✅ Ollama-first | ⚠️ multi-provider: OpenAI, VoyageAI, Gemini, Ollama | 🚫 not embeddings-based | ✅ Ollama-only | ⚠️ sentence-transformers family | ✅ Ollama-first |
| GPU path | 🧠 batching + concurrency | ❌ infra-delegated | 🚫 | ❌ sequential requests | | ⚠️ sequential for Ollama, parallel for OpenAI |
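The "batching + concurrency" GPU path means chunks are embedded in batches, with several batches in flight at once, instead of one request per chunk. A minimal sketch of that pattern in Python — the `embed_batch` stand-in and all sizes are illustrative assumptions, not TeaRAGs' actual implementation:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for a call to a local embedding server
# (e.g. an Ollama endpoint); returns one vector per input text.
def embed_batch(texts: list[str]) -> list[list[float]]:
    return [[float(len(t))] for t in texts]  # placeholder vectors

def embed_all(chunks: list[str], batch_size: int = 32,
              workers: int = 4) -> list[list[float]]:
    """Split chunks into batches and embed the batches concurrently,
    preserving the original chunk order."""
    batches = [chunks[i:i + batch_size]
               for i in range(0, len(chunks), batch_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(embed_batch, batches))  # map keeps order
    return [vec for batch in results for vec in batch]
```

Batching amortizes per-request overhead on the GPU, while the worker pool keeps the embedding server saturated — the difference between this and the "sequential requests" cells in the table above.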

Search Capabilities

| Criterion | TeaRAGs | claude-context | serena | rag-code-mcp | DocRAG | grepai |
|---|---|---|---|---|---|---|
| Semantic code search | ✅ hybrid BM25 + dense | | ✅ LSP-semantic, not NLP-semantic | ✅ vector + hybrid | ❌ docs only | |
| Documentation search | ✅ Markdown AST | ⚠️ Markdown via AST splitter | ⚠️ regex across all files | ⚠️ Markdown-only via search_docs | ✅ core purpose | ⚠️ text chunks, no format awareness |
| AST / structural parsing | ✅ tree-sitter code + markdown | ⚠️ tree-sitter for chunking only | 🧠 LSP symbol graph | ⚠️ Go/PHP AST, Python regex | ❌ langchain text-splitters | ⚠️ tree-sitter for call graphs only |
| Reranking | 🧠 hybrid (BM25 + RRF + signals) | 🧠 hybrid BM25 + dense with RRF | ⚠️ implicit LSP ordering | ⚠️ cosine + hardcoded hybrid weights | ❌ basic vector similarity | ⚠️ cosine + optional RRF + path boost |
| Git-aware (blame/churn/age) | ⚠️ git diff via shell, not metrics | ❌ planned in roadmap | | | | ❌ gitignore only |
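Several cells above mention fusing BM25 and dense rankings with RRF (Reciprocal Rank Fusion). For background, here is the standard RRF formula — each ranked list contributes 1/(k + rank) per document — as a minimal sketch; the function name and k = 60 default are conventional, not any listed tool's actual API:

```python
def rrf_fuse(rankings: dict[str, list[str]], k: int = 60) -> list[str]:
    """Fuse several ranked lists of document ids with Reciprocal Rank
    Fusion: score(d) = sum over lists of 1 / (k + rank of d)."""
    scores: dict[str, float] = {}
    for ranked in rankings.values():
        for rank, doc in enumerate(ranked, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    # Highest fused score first.
    return sorted(scores, key=scores.get, reverse=True)

# Example: BM25 and dense retrieval disagree; RRF rewards documents
# ranked well by both lists.
fused = rrf_fuse({"bm25": ["a", "b", "c"], "dense": ["b", "c", "a"]})
# → ["b", "a", "c"]
```

RRF needs only ranks, not comparable scores, which is why it is a common choice for combining lexical and vector retrieval — in contrast to the "hardcoded hybrid weights" approach noted for rag-code-mcp.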

Indexing

| Criterion | TeaRAGs | claude-context | serena | rag-code-mcp | DocRAG | grepai |
|---|---|---|---|---|---|---|
| Index as first-class object | | | 🚫 LSP-based, no persistent index | ❌ thin wrapper over Qdrant | | ⚠️ status tool, no versioning |
| Incremental indexing | ✅ git-delta + fingerprints | ⚠️ Merkle tree, file-level | 🚫 | ⚠️ file-level via mtime + hash | ❌ append-only | ⚠️ FS-level, not git-delta |
| Sub-file reindex | ✅ chunk-level delta | ❌ file-level | 🚫 | ❌ file-level | | ❌ full file re-chunk |
| Stateful model | | | 🚫 agent memory exists, not code model | | | |
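The difference between file-level and chunk-level reindexing comes down to what gets fingerprinted. A minimal sketch of chunk-level delta detection via content hashes — chunk ids and the storage shape are illustrative assumptions, not TeaRAGs' actual schema:

```python
import hashlib

def fingerprint(chunk: str) -> str:
    """Content hash of a single chunk's text."""
    return hashlib.sha256(chunk.encode()).hexdigest()

def chunks_to_reindex(stored: dict[str, str],
                      current: dict[str, str]) -> list[str]:
    """Compare stored fingerprints against freshly computed ones and
    return only the chunk ids whose content actually changed or is new.
    stored:  chunk id -> previously indexed fingerprint
    current: chunk id -> current chunk text"""
    return [cid for cid, text in current.items()
            if stored.get(cid) != fingerprint(text)]
```

With file-level schemes (mtime + hash, Merkle trees over files), editing one function re-embeds the whole file; with per-chunk fingerprints, only the edited chunk is re-embedded — the distinction the "Sub-file reindex" row is drawing.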

Scale

| Criterion | TeaRAGs | claude-context | serena | rag-code-mcp | DocRAG | grepai |
|---|---|---|---|---|---|---|
| Supported project size | ✅ 1M–10M+ LOC | ⚠️ claims "millions of lines", no benchmarks | ⚠️ LSP-dependent, issues on large projects | ⚠️ no benchmarks published | 🚫 docs-only | ⚠️ benchmarked at 155k LOC |
| Enterprise readiness | ⚠️ scalable arch, no RBAC/SSO | ❌ no enterprise features | ⚠️ Docker, tool restrictions | ⚠️ privacy yes, features no | | ⚠️ multi-workspace, PG backend |
| Reindex speed (large repo) | ✅ seconds–minutes | ⚠️ incremental exists, no benchmarks | 🚫 | ⚠️ sequential embedding bottleneck | | ⚠️ fast FS detection, sequential embedding |
| Local perf strategy | 🧠 batching + delta + no SaaS | ⚠️ cloud-first default, local possible | 🧠 LSP symbol graph | ⚠️ local Ollama, sequential | ⚠️ local LanceDB | ⚠️ local store + content dedup |

Summary

TeaRAGs occupies a unique position: it's the only MCP-native solution that combines semantic search, AST-aware chunking, git trajectory enrichment, and enterprise-scale indexing in a single local-first package.

The closest functional competitor is claude-context — it shares hybrid BM25+RRF reranking and tree-sitter AST chunking, but lacks git enrichment, sub-file reindexing, and signal-based reranking presets. For pure local code search, grepai offers call graph tracing and optional RRF hybrid search, but lacks AST-aware chunking for search and git enrichment. For symbol-level analysis, serena takes an LSP-based approach that complements rather than competes with RAG-based search.


Comparison current as of February 2026. Based on publicly available code and documentation. No dinosaurs were harmed in the making of this table. 🦖