Bad Theory Labs is a research lab and product studio. We are not a model company, and we are not an AI wrapper. We build the substrate layer that makes AI genuinely useful inside real work: the memory, the retrieval, the context, and the ambient presence that turns intelligence into something that actually follows through.
Research without product is just paper. Product without research is just imitation. We do both — because neither alone is enough.
AI systems need structured, reliable memory across sessions, tools, and workflows. Not a bigger prompt window. A real memory layer — one that stores what matters and retrieves it accurately when it matters.
Systems should not surface everything they detect. They need taste, prioritization, and restraint. The goal is not to be helpful all the time — it is to show up only when the signal is high enough.
AI should not stop at suggestion. It should execute meaningful work. Browse the web, write code, manage files, run tasks end to end. The machine should be able to follow through.
The system should become more useful over time by learning the user's patterns, tools, and repeated tasks — without being explicitly programmed to. Intelligence that compounds.
The common pattern: bolt AI onto existing software. Optimize for engagement. Ship a demo. Call it an agent. We have no interest in that.
The current generation of AI products is limited in predictable ways. They forget too much. They retrieve the wrong context. They interrupt without judgment. They can generate, but rarely follow through.
“The goal of Bad Theory Labs is not to ship isolated AI features. The goal is to help define a new interface to computing.”
Two products. One thesis. Memory and agency — the two things that make AI useful inside real work.
Marrow lives on your laptop. It watches what you do, builds up a picture of your work, and stays completely silent — until the moment doing something is actually worth more than the interruption. Then it doesn't suggest. It acts.
Persistent memory and grounded retrieval for AI agents. The highest preference recall on every public benchmark. Zero hallucinations on stored facts. Three lines of code.
| System | Overall | Single-session | Hallucination |
|---|---|---|---|
| RetainDB | 88% | — | 0% |
| Supermemory | — | — | — |
| Zep | — | — | — |
| GPT-5 baseline | — | — | 95.5% |
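A memory layer of the kind described above can be sketched in a few lines. This is a generic illustration with invented names, not RetainDB's actual API; a real system would retrieve by embedding similarity with grounded citations rather than by token overlap:

```python
class MemoryStore:
    """Hypothetical sketch of a session-persistent memory layer:
    store facts, and retrieve only what was actually stored."""

    def __init__(self):
        self.facts: list[str] = []

    def store(self, fact: str) -> None:
        self.facts.append(fact)

    def retrieve(self, query: str, k: int = 1) -> list[str]:
        # Score each fact by word overlap with the query; a real
        # system would use embedding similarity, not token overlap.
        q = set(query.lower().split())
        scored = sorted(self.facts,
                        key=lambda f: -len(q & set(f.lower().split())))
        # Return only stored facts, never generated text, so
        # retrieval cannot hallucinate on stored facts.
        return scored[:k]

# Three lines of usage, in the spirit of the claim above:
mem = MemoryStore()
mem.store("user prefers dark mode")
print(mem.retrieve("what mode does the user prefer?"))
```

The design choice worth noting is the last one: because `retrieve` can only return verbatim stored facts, hallucination on stored facts is structurally zero; accuracy then reduces to ranking quality.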
The field has been optimizing the wrong objective for a decade. Next-token prediction induces compression as a side effect. We are investigating what happens when compression is the objective — not a byproduct.
“Intelligence is compression. Not metaphorically — mechanically.”

Read the full program document →
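One standard way to make this claim precise is the minimum description length (MDL) view; the formulas below are that textbook formalization, not Bad Theory Labs' program, offered as a sketch of the distinction between compression as side effect and compression as objective:

```latex
% Next-token prediction minimizes expected code length as a side effect:
% the cross-entropy loss is the average number of bits needed to encode
% the data under the model's predictive distribution p_\theta.
L(\theta) \;=\; -\frac{1}{N}\sum_{i=1}^{N} \log_2 p_\theta(x_i \mid x_{<i})

% The MDL (two-part code) objective makes compression itself the target:
% minimize the bits to describe the model, \ell(\theta), plus the bits
% to encode the data given that model.
\min_{\theta}\; \Big[\, \ell(\theta) \;+\; \sum_{i=1}^{N} -\log_2 p_\theta(x_i \mid x_{<i}) \,\Big]
```

Under this reading, "compression as byproduct" optimizes only the second term; "compression as objective" optimizes both, charging the model for its own description length.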
Looking for investors who share conviction about where intelligence is going — not just capital.
Pre-seed · early 2025 · hello@badtheorylabs.com
Not a checklist. A real filter. If at least one of these resonates, we should talk.
We reply to every message. No deck required to start — a paragraph about why you're interested is enough.