Bigger models won't fix contradiction drift.
Weighted trust decay will.
SVTD (Surgical Vector Trust Decay) adds trust-weighted memory and contradiction handling so your RAG / agents improve with real user corrections, without hard-deleting history. The failure modes it targets:
- Contradictions accumulate (the system confidently flips)
- User corrections don't stick (or get overwritten later)
- Stale "truth" persists (old memories win because they're retrieved)
- Teams add hacks (prompts, filters, "don't answer" rules) forever
Demo walkthrough (Loom): https://www.loom.com/share/8d979fe7fa3b43889f9e18b86b7446e4
Accuracy is a snapshot. Trust is a timeline.
Benchmarks measure whether an answer is correct right now. Production systems fail when answers drift across time, corrections, and conflicting sources.
SVTD treats memory as a living substrate: it can be useful, wrong, corrected, superseded—and therefore must be weighted, not blindly reused.
SVTD in one sentence
When the user corrects the system, future retrieval should prefer the corrected truth, while older conflicting memories decay instead of silently dominating.
Result: less contradiction drift, fewer "why did it say the opposite yesterday?", more deployable assistants.
Think of SVTD as version control for facts: old memories stay in history, but retrieval always points to the latest resolved truth.
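To make the analogy concrete, here is a minimal sketch of that idea; the record shape and field names are illustrative assumptions, not the actual SVTD data model:
from dataclasses import dataclass
from typing import Optional

@dataclass
class FactVersion:
    # Illustrative only: one "commit" in a fact's history
    memory_id: str
    text: str
    trust: float = 1.0                    # retrieval weight; the record is never hard-deleted
    superseded_by: Optional[str] = None   # points at the corrected version, if any

def resolve_latest(fact: FactVersion, history: dict) -> FactVersion:
    # Follow the supersession chain so retrieval lands on the latest resolved truth
    while fact.superseded_by is not None:
        fact = history[fact.superseded_by]
    return fact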
How it works
1) Trust score per memory
Every memory chunk carries a trust weight. New signals can raise trust; corrections can suppress trust. No hard deletes required.
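A minimal sketch of the mechanic, assuming simple multiplicative updates (the names and factors here are assumptions; the shipped logic lives inside SimpleRAGWithTrustWeights):
# Hypothetical sketch: trust moves, memories stay.
memories = {
    "m1": {"text": "Refunds take 30 days.", "trust": 1.0},
    "m2": {"text": "Refunds take 5 days.",  "trust": 1.0},
}

def suppress(memory_id: str, factor: float = 0.3) -> None:
    # A correction lowers trust; the memory remains in history.
    memories[memory_id]["trust"] *= factor

def reinforce(memory_id: str, boost: float = 1.5, cap: float = 2.0) -> None:
    # A confirming signal raises trust, up to a cap.
    memories[memory_id]["trust"] = min(memories[memory_id]["trust"] * boost, cap)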
2) Contradiction-aware suppression
When two "truths" conflict, SVTD prefers higher-trust / newer-corrected items, and decays conflicting items so they stop winning retrieval.
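A hedged sketch of the suppression step; how conflicts are detected (embedding similarity, metadata, explicit user flags) is the implementation's concern and not shown here:
# Hypothetical: when a correction lands, the corrected memory gains trust
# and every memory that conflicts with it decays instead of being deleted.
trust = {"old_policy": 1.0, "corrected_policy": 1.0}

def apply_correction(corrected_id: str, conflicting_ids: list[str], boost: float = 1.5, decay: float = 0.2) -> None:
    trust[corrected_id] = min(trust[corrected_id] * boost, 2.0)
    for old_id in conflicting_ids:
        trust[old_id] *= decay   # stale "truth" stops winning retrieval

apply_correction("corrected_policy", ["old_policy"])
print(trust)   # {'old_policy': 0.2, 'corrected_policy': 1.5}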
3) Drop-in retrieval hook
The reference implementation shows where trust weighting belongs in retrieval. SVTD outputs a re-ranked / suppressed memory set your LLM actually sees.
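In spirit, the hook is a trust-weighted re-rank applied before the top-k cut. A sketch only; in the demo, retrieve() applies this automatically:
# Hypothetical re-ranking step: raw similarity scaled by trust, so suppressed memories sink.
def rerank(candidates: list, n_results: int = 10) -> list:
    # candidates: [{"memory_id": ..., "similarity": float, "trust": float}, ...]
    scored = [dict(c, score=c["similarity"] * c["trust"]) for c in candidates]
    scored.sort(key=lambda c: c["score"], reverse=True)
    return scored[:n_results]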
MemoryGate / SVTD — Private Beta (Evaluation Only)
Goal: make AI systems less wrong over time by handling contradictions + user corrections without deleting history.
You get:
- Core SVTD trust-weighting system (integrated into a reference RAG implementation)
- demo_trust_weight_rag.py (start here)
- Reference implementation showing where trust weighting belongs in retrieval
You do NOT get:
- Any proprietary "truth file" automation roadmap
- Customer datasets
- Production license
Integration (actual code):
from demo_trust_weight_rag import SimpleRAGWithTrustWeights
# Initialize RAG system with trust weighting
rag = SimpleRAGWithTrustWeights(memory_file_path="memories.jsonl")
# Standard retrieval (trust weighting applied automatically)
results = rag.retrieve("user query", n_results=10)
# User correction flow
rag.flag_memory(memory_id) # Reduces trust weight
rag.approve_memory(memory_id) # Increases trust weight
# Re-query — results reflect updated trust
new_results = rag.retrieve("same query", n_results=10)
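A hedged example of wiring that correction flow into an app loop. The feedback shape here is an assumption for this sketch; check the demo for the actual result format:
from demo_trust_weight_rag import SimpleRAGWithTrustWeights

rag = SimpleRAGWithTrustWeights(memory_file_path="memories.jsonl")

def handle_user_feedback(query: str, wrong_ids: list, confirmed_ids: list) -> list:
    for memory_id in wrong_ids:
        rag.flag_memory(memory_id)        # user marked this memory wrong: trust drops
    for memory_id in confirmed_ids:
        rag.approve_memory(memory_id)     # user confirmed this memory: trust rises
    return rag.retrieve(query, n_results=10)   # next retrieval reflects updated trust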
If demo_trust_weight_rag.py feels like "finally, someone solved this," you're Customer Zero.
If it feels like "I could rewrite it in 5 minutes," no hard feelings — it's not your pain.
Who this is for
For
- Builders shipping RAG or agent systems to real users
- Teams seeing contradiction drift / "it changed its mind" failures
- Apps where user correction must persist (support, ops, compliance, knowledge work)
- Engineers who want to inspect code, not trust a black box
Not for
- Pure prompt-hack workflows with no retrieval layer
- Hobby chatbots where being wrong has no cost
- Teams that want a hosted API without code review
- People looking for "one prompt to stop hallucinations"
Get the Private Repo
We're doing a small private beta with builders. You'll get a private GitHub invite and a runnable demo. If it resonates, we'll talk production licensing.
Tip: if you're evaluating with a team, send one request and list all GitHub handles.
If these are true, SVTD will likely help:
- Your assistant contradicts itself across days/weeks
- User corrections don't reliably persist
- Stale docs keep winning retrieval
- You've added "guardrails" and still feel uneasy
SVTD Private Beta License (plain English)
Evaluation-only. This repo is provided solely for internal testing and evaluation.
- No production use
- No redistribution
- No derivative commercial systems
Production use requires a written license agreement from MemoryGate. © MemoryGate. All rights reserved.
(This is intentionally readable. We'll do the formal version when a real customer needs it.)