Abstract: Chain-of-thought distillation (CoT-distillation) aims to endow small language models (SLMs) with reasoning ability, improving their performance on specific tasks by allowing them to ...
a The Windreich Department of Artificial Intelligence and Human Health, Mount Sinai Health System, New York, NY, USA; b The Hasso Plattner Institute for Digital Health at Mount Sinai, Mount Sinai Health ...
As large language models (LLMs) gain momentum worldwide, there is a growing need for reliable ways to measure their performance. Benchmarks that evaluate LLM outputs allow developers to track ...