ChatBetter

Thank you for participating in our research preview.

Last summer, over 30,000 people used ChatBetter to explore whether interacting with multiple LLMs at once could produce more trustworthy results.

Together, we spent billions of tokens finding better ways to use LLMs. The best answers didn't come from any single model — they came from routing questions to the right model for the task, grounding responses in source data, and letting agents judge and combine outputs automatically.
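The routing-and-judging pattern described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not ChatBetter's implementation: the model names, the keyword-based router, and the confidence-based judge are all stand-ins for real LLM calls and real LLM-as-judge scoring.

```python
from typing import Callable

# Stand-ins for real LLM backends; each returns (answer, self-reported confidence).
# In a real system these would be API calls to different providers.
MODELS: dict[str, Callable[[str], tuple[str, float]]] = {
    "code-model": lambda q: (f"[code answer to: {q}]", 0.9),
    "general-model": lambda q: (f"[general answer to: {q}]", 0.6),
}

def route(question: str) -> list[str]:
    """Pick which models to ask, using a crude keyword heuristic."""
    if any(kw in question.lower() for kw in ("code", "bug", "function")):
        return ["code-model", "general-model"]
    return ["general-model"]

def judge(candidates: list[tuple[str, str, float]]) -> str:
    """Judge stand-in: return the candidate answer with the highest confidence.
    A real system would use another LLM to compare the answers directly."""
    best = max(candidates, key=lambda c: c[2])
    return best[1]

def answer(question: str) -> str:
    """Route the question, collect candidate answers, and let the judge pick one."""
    names = route(question)
    candidates = [(name, *MODELS[name](question)) for name in names]
    return judge(candidates)

print(answer("Why does my function crash?"))
```

The point of the pattern is that routing and judging are separate, swappable stages: a better router or a stronger judge improves the whole pipeline without touching the individual models.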

What we learned about multi-model orchestration, quality guardrails, and trustworthy AI output is shaping everything we build next.

We couldn't have learned any of it without you. Every piece of feedback and every bug report made this research better. Thank you for sharing your curiosity with us.

With gratitude,

Noah, Tyler, and the Unsupervised team

What we're building now

Unsupervised

Long-running agentic data analysis for your entire dataset

DeepWork

Automate any long-running process with Claude Code

Open source alternatives inspired by ChatBetter

OpenChatBetter

Send prompts to multiple models, compare responses side-by-side

TokenWar

Terminal-based LLM arena with split-pane TUI and LLM-as-judge scoring