
Powerful reasoning models can be trained by scaling data, verifying reasoning traces, and scaling model size. Releasing OpenThinker-32B, a state-of-the-art open-data reasoning model.
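As a rough illustration of what "verifying reasoning traces" can mean in practice, here is a minimal sketch that keeps only samples whose final answer matches a known ground truth. The `extract_answer` helper, the sample schema, and the `\boxed{}` answer convention are illustrative assumptions, not the actual OpenThinker pipeline.

```python
import re

def extract_answer(trace: str) -> str | None:
    # Assume the model marks its final answer as \boxed{...} (a common
    # convention in math reasoning traces; an assumption here, not the
    # documented OpenThinker format).
    match = re.search(r"\\boxed\{([^}]*)\}", trace)
    return match.group(1).strip() if match else None

def verify_traces(samples: list[dict]) -> list[dict]:
    # Keep only samples whose extracted answer matches the ground truth.
    # Each sample is assumed to look like:
    #   {"question": ..., "trace": ..., "ground_truth": ...}
    return [
        s for s in samples
        if (ans := extract_answer(s["trace"])) is not None
        and ans == s["ground_truth"].strip()
    ]

# Example: the second trace is dropped because its answer is wrong.
samples = [
    {"question": "2+2?", "trace": "... so \\boxed{4}", "ground_truth": "4"},
    {"question": "3*3?", "trace": "... so \\boxed{6}", "ground_truth": "9"},
]
print(len(verify_traces(samples)))  # -> 1
```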
If you can't measure it, you can't improve it. Releasing reasoning benchmarks in our model evaluation tool, Evalchemy.
Announcing Open Thoughts, an open-source effort to curate the best open reasoning datasets.
We trained Bespoke-Stratos-32B, our reasoning model distilled from DeepSeek-R1 using Berkeley NovaSky’s Sky-T1 data pipeline. The model outperforms Sky-T1 and o1-preview on math and code reasoning benchmarks, and nearly matches the performance of DeepSeek-R1-Distill-Qwen-32B while being trained on 47x fewer examples.
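For readers unfamiliar with this style of distillation, the sketch below shows the core idea as plain supervised fine-tuning on teacher-generated reasoning traces. The small stand-in student model, the toy data, and the unmasked prompt loss are simplifying assumptions for illustration, not the actual Bespoke-Stratos training recipe.

```python
# A minimal sketch of distillation-as-supervised-fine-tuning: train a
# student model on (prompt, reasoning trace) pairs produced by a teacher
# such as DeepSeek-R1. Everything below is a toy setup, not the real
# Bespoke-Stratos configuration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

student = "Qwen/Qwen2.5-0.5B"  # small stand-in student model (assumption)
tokenizer = AutoTokenizer.from_pretrained(student)
model = AutoModelForCausalLM.from_pretrained(student)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

pairs = [  # (prompt, teacher-generated reasoning trace) -- placeholder data
    ("What is 2+2?", "Let's think step by step. 2 + 2 = 4. Answer: 4"),
]

model.train()
for prompt, trace in pairs:
    # Standard causal-LM loss over prompt + trace. Using labels == input_ids
    # also penalizes prompt tokens; real pipelines usually mask those out.
    batch = tokenizer(prompt + "\n" + trace, return_tensors="pt")
    loss = model(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```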
AI hallucinations can derail accuracy, but Bespoke's latest factuality model is designed to combat them. Learn how advanced factuality checks help models deliver more reliable outputs and reduce common errors in data generation.