
Together, we spent billions of tokens finding better ways to use LLMs. The best answers didn't come from any single model — they came from routing questions to the right model for the task, grounding responses in source data, and letting agents judge and combine outputs automatically.
What we learned about multi-model orchestration, quality guardrails, and trustworthy AI output is shaping everything we build next.
We couldn't have learned any of it without you. Every piece of feedback and every bug report made this research better. Thank you for sharing your curiosity with us.
With gratitude,
Noah, Tyler, and the Unsupervised team
What we're building now
Long-running agentic data analysis for your entire dataset
Automate any long-running process with Claude Code
Open source alternatives inspired by ChatBetter
Send prompts to multiple models, compare responses side-by-side
Terminal-based LLM arena with split-pane TUI and LLM-as-judge scoring
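The compare-and-judge pattern running through the list above — send a prompt to several models, then let a judge pick the best response — can be sketched roughly as follows. The model functions here are hypothetical stubs standing in for real LLM API calls, and the judge is a toy heuristic where a real system would use an LLM-as-judge scoring pass:

```python
# Sketch of multi-model orchestration: fan a prompt out to several
# models, collect their answers, and let a judge select one.
# Both "models" are stubs; in practice each would call a real LLM API.

def math_model(prompt: str) -> str:
    # Stub standing in for a model that is strong at arithmetic.
    return "4" if "2 + 2" in prompt else "I specialize in math."

def general_model(prompt: str) -> str:
    # Stub standing in for a general-purpose conversational model.
    return "Two plus two is four, a basic arithmetic fact."

def judge(answers: dict[str, str]) -> str:
    # Toy judge: prefer the shortest non-empty answer.
    # A real judge would itself be an LLM scoring each candidate.
    return min((a for a in answers.values() if a), key=len)

def orchestrate(prompt: str) -> str:
    # Fan out to every candidate model, then judge the responses.
    models = {"math": math_model, "general": general_model}
    answers = {name: model(prompt) for name, model in models.items()}
    return judge(answers)

print(orchestrate("What is 2 + 2?"))  # -> 4
```

The same loop extends naturally to grounding (passing source data into each prompt) and to combining outputs rather than picking a single winner.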