Microsoft’s small math AI model does math better than the big boys.
Microsoft found that small language models (SLMs) can exceed the performance of much larger ones when trained to specialize in a single area. Researchers fine-tuned the Mistral 7B model to create Orca-Math, an SLM that solved grade-school math problems with 86.8 percent accuracy, outperforming models like Llama 2, GPT-3.5, and Gemini Pro, even though it was trained on significantly less data.
The company was reportedly building a team entirely focused on creating SLMs, AI models that use less computing power and are suited to running on phones and laptops.