Hello!
As a handsome local AI enjoyer™, you've probably noticed one of the big flaws with LLMs:
It lies. Confidently. ALL THE TIME.
(Technically, it "bullshits": https://link.springer.com/article/10.1007/s10676-024-09775-5)
I'm autistic and extremely allergic to vibes-based tooling, so… I built a thing. Maybe it's useful to you too.
The thing: llama-conductor
llama-conductor is a router that sits between your frontend (OWUI / SillyTavern / LibreChat / etc) and your backend (llama.cpp + llama-swap, or any OpenAI-compatible endpoint). Local-first (because fuck big AI), but it should talk to anything OpenAI-compatible if you point it there (note: experimental so YMMV).
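To make the "sits between" part concrete, here's a minimal sketch of pointing an OpenAI-compatible client at the router instead of the backend. The address, API key, and model id below are placeholders for illustration, not llama-conductor's actual defaults.

```python
# Minimal sketch: any OpenAI-compatible client can be pointed at the router
# instead of the backend directly. Address, key, and model id are placeholders,
# not llama-conductor's real defaults.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",   # hypothetical conductor address
    api_key="not-needed-locally",          # local-first: no real key required
)

reply = client.chat.completions.create(
    model="whatever-llama-swap-serves",    # placeholder model id
    messages=[{"role": "user", "content": "hello"}],
)
print(reply.choices[0].message.content)
```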
Not a model, not a UI, not magic voodoo.
A glass-box that makes the stack behave like a deterministic system, instead of a drunk telling a story about the fish that got away.
TL;DR: "In God we trust. All others must bring data."
Three examples:
1) KB mechanics that don't suck (1990s engineering: markdown, JSON, checksums)
You keep "knowledge" as dumb folders on disk. Drop docs (.txt, .md, .pdf) in them. Then:
- `>>attach <kb>` → attaches a KB folder
- `>>summ new` → generates `SUMM_*.md` files with SHA-256 provenance baked in
- `>>…` → moves the original to a sub-folder
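If "SHA-256 provenance baked in" sounds abstract, here's roughly the idea as a sketch. This is my illustration, not the project's actual code or SUMM file format: the summary records the hash of the exact source it was derived from.

```python
# Sketch of the provenance idea: a SUMM_*.md file that records the SHA-256
# of the exact source document it summarizes. Illustration only; the real
# SUMM format in llama-conductor may differ.
import hashlib
from pathlib import Path

def summ_stub(source: Path, summary_text: str) -> Path:
    digest = hashlib.sha256(source.read_bytes()).hexdigest()
    out = source.with_name(f"SUMM_{source.stem}.md")
    out.write_text(
        f"<!-- source: {source.name} | sha256: {digest} -->\n\n{summary_text}\n",
        encoding="utf-8",
    )
    return out

# Usage: summ_stub(Path("kb/c64_history.md"), "Launched at $595, cut to $250 ...")
```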
Now, when you ask something like:
"yo, what did the Commodore C64 retail for in 1982?"
…it answers from the attached KBs only. If the fact isn't there, it tells you - explicitly - instead of winging it. E.g.:
The provided facts state the Commodore 64 launched at $595 and was reduced to $250, but do not specify a 1982 retail price. The Amiga's pricing and timeline are also not detailed in the given facts.
Missing information includes the exact 1982 retail price for Commodore's product line and which specific model(s) were sold then. The answer assumes the C64 is the intended product but cannot confirm this from the facts.
Confidence: medium | Source: Mixed
No vibes. No "well probably…". Just: here's what's in your docs, here's what's missing, don't GIGO yourself into stupid.
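For a feel of that contract, here's a rough sketch at the prompt level. The function and instruction text are mine, purely to illustrate the "answer from attached facts or state what's missing" rule; they are not the router's actual prompt.

```python
# Sketch of the "no source, no claim" contract at the prompt level.
# Instruction wording and structure are illustrative, not llama-conductor's
# actual system prompt.
def build_grounded_prompt(question: str, summ_texts: list[str]) -> list[dict]:
    facts = "\n\n".join(summ_texts) if summ_texts else "(no facts attached)"
    system = (
        "Answer ONLY from the provided facts. "
        "If the facts do not contain the answer, state exactly what is missing. "
        "Do not guess. End with 'Confidence:' and 'Source:' lines."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": f"FACTS:\n{facts}\n\nQUESTION:\n{question}"},
    ]
```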
And when you're happy with your summaries, you can:
`>>move to vault` → promote those SUMMs into Qdrant for the heavy mode.
2) Mentats: proof-or-refusal mode (Vault-only)
Mentats is the "deep think" pipeline against your curated sources. It's enforced isolation:
- no chat history
- no filesystem KBs
- no Vodka
- Vault-only grounding (Qdrant)
It runs triple-pass (thinker → critic → thinker). It's slow on purpose. You can audit it. And if the Vault has nothing relevant? It refuses and tells you to go pound sand:
FINAL_ANSWER:
The provided facts do not contain information about the Acorn computer or its 1995 sale price.
Sources: Vault
FACTS_USED: NONE
[ZARDOZ HATH SPOKEN]
Also yes, it writes a `mentats_debug.log`, because of course it does. Go look at it any time you want.
The flow is basically: Attach KBs → SUMM → Move to Vault → Mentats. No mystery meat. No "trust me bro, embeddings."
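If it helps to picture the triple-pass, here's a sketch of the shape as described above. `call_model` and the prompt text are stand-ins of mine; the real Mentats pipeline, refusal format, and logging live in the repo.

```python
# Sketch of a thinker -> critic -> thinker pass over Vault-only context.
# `call_model` and the prompt wording are placeholders, not the real Mentats code.
def mentats_stub(question: str, vault_chunks: list[str], call_model) -> str:
    if not vault_chunks:
        # Vault has nothing relevant: refuse instead of free-associating.
        return ("FINAL_ANSWER:\nThe provided facts do not contain relevant "
                "information.\nSources: Vault\nFACTS_USED: NONE")
    facts = "\n".join(vault_chunks)
    draft = call_model(f"Answer strictly from these facts:\n{facts}\n\nQ: {question}")
    critique = call_model(f"Criticise this draft against the same facts only:\n{draft}")
    final = call_model(
        f"Revise the draft using the critique. Facts:\n{facts}\n\n"
        f"Draft:\n{draft}\n\nCritique:\n{critique}"
    )
    return final
```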
3) Vodka: deterministic memory on a potato budget
Local LLMs have two classic problems: goldfish memory + context bloat that murders your VRAM.
Vodka fixes both without extra model compute. (Yes, I used the power of JSON files to hack the planet instead of buying more VRAM from NVIDIA).
- `!!` stores facts verbatim (JSON on disk)
- `??` recalls them verbatim (TTL + touch limits so memory doesn't become landfill)
- CTC (Cut The Crap) hard-caps context (last N messages + char cap) so you don't get VRAM spikes after 400 messages
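A rough sketch of the store/recall mechanics just listed. Field names, the TTL default, and the touch cap here are made up for illustration; Vodka's real schema lives in the repo.

```python
# Sketch of verbatim fact storage with TTL + touch limits, JSON on disk.
# Field names and limits are illustrative, not Vodka's actual schema.
import json, time
from pathlib import Path

STORE = Path("vodka_facts.json")

def remember(key: str, value: str, ttl_s: int = 7 * 24 * 3600) -> None:
    facts = json.loads(STORE.read_text()) if STORE.exists() else {}
    facts[key] = {"value": value, "expires": time.time() + ttl_s, "touches": 0}
    STORE.write_text(json.dumps(facts, indent=2))

def recall(key: str, max_touches: int = 50) -> str | None:
    facts = json.loads(STORE.read_text()) if STORE.exists() else {}
    fact = facts.get(key)
    if not fact or time.time() > fact["expires"] or fact["touches"] >= max_touches:
        return None                  # expired or worn out: no vibes, just None
    fact["touches"] += 1             # touch accounting keeps memory from becoming landfill
    STORE.write_text(json.dumps(facts, indent=2))
    return fact["value"]             # returned verbatim, never paraphrased
```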
So instead of:
"Remember my server is 203.0.113.42" → "Got it!" → [100 msgs later] → "127.0.0.1 🥰"
you get:
`!! my server is 203.0.113.42`
`?? server ip` → `203.0.113.42` (with TTL/touch metadata)
And because context stays bounded: stable KV cache, stable speed, your potato PC stops crying.
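And the CTC half, as a sketch: keep only the last N messages and trim to a character budget so the prompt (and therefore the KV cache) stays bounded. The numbers below are illustrative, not CTC's defaults.

```python
# Sketch of a CTC-style hard cap: keep the last N messages and trim to a
# character budget. Limits are illustrative, not CTC's actual defaults.
def cut_the_crap(messages: list[dict], max_msgs: int = 20, max_chars: int = 12_000) -> list[dict]:
    kept = messages[-max_msgs:]                          # last N messages only
    while kept and sum(len(m["content"]) for m in kept) > max_chars:
        kept.pop(0)                                      # drop oldest until under the char cap
    return kept
```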
There's more (a lot more) in the README, but I've already over-autism'ed this post.
TL;DR:
If you want your local LLM to shut up when it doesn't know and show receipts when it does, come poke it:
- Primary (Codeberg): https://codeberg.org/BobbyLLM/llama-conductor
- Mirror (GitHub): https://github.com/BobbyLLM/llama-conductor
PS: Sorry about the AI slop image. I can't draw for shit.
PPS: A human with ASD wrote this using Notepad++. If the formatting is weird, now you know why.



D) None of the above.
I didn't "solve hallucination". I changed the failure mode. The model can still hallucinate internally. The difference is it's not allowed to surface claims unless they're grounded in attached sources.
If retrieval returns nothing relevant, the router forces a refusal instead of letting the model free-associate. So the guarantee isn't "the model is always right."
The guarantee is "the system won't pretend it knows when the sources don't support it." That's it. That's the whole trick.
KB size doesn't matter much here. Small or large, the constraint is the same: no source, no claim. GTFO.
That's a control-layer property, not a model property. If it helps: think of it as moving from "LLM answers questions" to "LLM summarizes evidence I give it, or says 'insufficient evidence.'"
Again, that's the whole trick.
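For the skeptical, that gate looks roughly like this. A sketch only; the function name and refusal text are mine, and the real routing logic is in the repo.

```python
# Sketch of the control-layer guarantee: if retrieval returns nothing, the
# router emits the refusal itself; the model never gets a chance to free-associate.
# Function name and refusal text are illustrative.
def answer_or_refuse(question: str, retrieved: list[str], call_model) -> str:
    if not retrieved:
        return ("The attached sources contain no information relevant to this "
                "question.\nConfidence: n/a | Source: none")
    prompt = ("Answer only from these sources:\n" + "\n".join(retrieved) +
              f"\n\nQuestion: {question}")
    return call_model(prompt)
```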
You don't need to believe me. In fact, please don't. Test it.
I could be wrong… but if I'm right (and if you attach this to a non-retarded LLM), then maybe, just maybe, this doesn't suck balls as much as you think it might.
Maybe itâs even useful to you.
I dunno. Try it?
So⊠Rag with extra steps and rag summarization? What about facts that are not rag retrieval?
Parts of this are RAG, sure.
The RAG part is the Vault path: the SUMMs you promote go into Qdrant, and Mentats retrieves against that. So yes, that layer is RAG with extra steps.
What's not RAG:
KB mode (filesystem SUMM path)
This isn't vector search. It's deterministic, file-backed grounding. You attach folders as needed. The system summarizes and hashes docs. The model can only answer from those summaries in that mode. There's no semantic retrieval step. It can style and jazz around the answer a little, but the answer is the answer is the answer.
If the fact isn't in the attached KB, the router forces a refusal. Put up or shut up.
Vodka (facts memory)
That's not retrieval at all, in the LLM sense. It's verbatim key-value recall.
Again, no embeddings, no similarity search, no model interpretation.
"Facts that aren't RAG"
In my setup, they land in one of two buckets.
Short-term / user facts → Vodka. That's for things like numbers, appointments, lists, one-off notes, etc. Deterministic recall, no synthesis.
Curated knowledge → KB / Vault. Things you want grounded, auditable, and reusable.
In response to the implicit "why not just RAG then":
The classic RAG failure mode is: retrieval is fuzzy → model fills gaps → user can't tell which part came from where.
The extra "steps" are there to separate memory from knowledge, separate retrieval from synthesis, and make refusal a legal output, not a model choice.
So yeah: some of it is RAG. RAG is good. The point is that this system is designed so not everything of value is forced through a semantic search + generate loop. I don't trust LLMs. I am actively hostile to them. This is me telling my LLM to STFU and prove it, or GTFO. I know that's maybe a weird way to operate (adversarial, assume the worst, engineer around the issue), but that's how ASD brains work.
Oh boy. So hallucination will occur here, and all further retrievals will be deterministically poisoned?
Huh? That is the literal opposite of what I said. Like, diametrically opposite.
Let me try this a different way.
Hallucination in SUMM doesn't "poison" the KB, because SUMMs are not authoritative facts; they're derived artifacts with provenance. They're explicitly marked as model output tied to a specific source hash. Two key mechanics stop the cascade you're describing:
1) The source of truth is still the original document, not the summary. The summary is just a compressed view of it. That's why it carries the SHA-256 of the original file. If a SUMM looks wrong, you can:
a) trace it back to the exact document version
b) regenerate it
c) discard it
d) read the original doc yourself and manually curate it.
Nothing is "silently accepted" as ground truth.
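That's what the hash buys you in practice: you can always re-check a SUMM against the exact source version it claims to summarize. The sketch below assumes the provenance-line format from my earlier SUMM sketch, not llama-conductor's actual layout.

```python
# Sketch: verify that a SUMM still matches the source version it was derived
# from by recomputing the SHA-256. The provenance-line format is assumed
# (same as the earlier SUMM sketch), not the project's actual layout.
import hashlib, re
from pathlib import Path

def summ_matches_source(summ_file: Path, source_file: Path) -> bool:
    header = summ_file.read_text(encoding="utf-8").splitlines()[0]
    m = re.search(r"sha256:\s*([0-9a-f]{64})", header)
    if not m:
        return False                               # no provenance -> don't trust it
    current = hashlib.sha256(source_file.read_bytes()).hexdigest()
    return m.group(1) == current                   # mismatch means the source moved on
```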
2) The dangerous step would be: model output -> auto-ingest into long-term knowledge.
That's explicitly not how this works.
The flow is: Attach KB -> SUMM -> human reviews -> OK, move to Vault -> Mentats runs against that.
Don't like a SUMM? Don't push it into the Vault. There's a gate between "model said a thing" and "system treats this as curated knowledge." That's you - the human. Don't GI and it won't GO.
Determinism works for you here. The hash doesn't freeze the hallucination; it freezes the input snapshot. That makes bad summaries reproducible and traceable, which is the opposite of silent drift.
If a SUMM is wrong and you miss it, the system will be consistently wrong in a traceable way, not creatively wrong in a new way every time.
That's a much easier class of bug to detect and correct. Again: the proposition is not "the model will never hallucinate." It's "it can't silently propagate hallucinations without a human explicitly allowing it to, and when it does, you can trace it back to the source version."
And that is ultimately what keeps the pipeline from becoming "poisoned".