Microsoft researchers have developed On-Policy Context Distillation (OPCD), a training method that permanently embeds ...
MIT introduces Self-Distillation Fine-Tuning to reduce catastrophic forgetting; the method trains on student-teacher demonstrations and requires roughly 2.5x the compute.
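The blurb does not spell out the training objective, so as a rough illustration only, here is a minimal sketch of a token-level student-teacher distillation loss of the kind such methods commonly use. The function name, tensor shapes, and temperature are illustrative assumptions, not MIT's implementation.

```python
# Hypothetical sketch of a student-teacher distillation step (not the SDFT code).
# The student's next-token distribution is pulled toward a frozen teacher's
# distribution via a KL divergence on temperature-softened logits.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Token-level KL(teacher || student) on softened distributions."""
    t = temperature
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    # kl_div expects log-probabilities for the input and probabilities for the target.
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * (t * t)

# Toy usage: batch of 4 sequences, 10 tokens each, vocabulary of 32.
student_logits = torch.randn(4, 10, 32, requires_grad=True)
teacher_logits = torch.randn(4, 10, 32)
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()
print(float(loss))
```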
Trustworthy AI isn’t just about predicting the right outcome; it’s about knowing how confident we should actually be.
Thermometer, a new calibration technique tailored for large language models, can prevent LLMs from being overconfident or underconfident about their predictions. The technique aims to help users know ...
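The blurb does not describe Thermometer's mechanism; a standard primitive for this kind of confidence calibration is temperature scaling, where logits are divided by a scalar temperature fitted on held-out data. The sketch below shows plain temperature scaling purely to illustrate the over/underconfidence problem; it is not the Thermometer method itself, and the function name and toy data are assumptions.

```python
# Minimal sketch of temperature scaling, a common calibration baseline
# (illustrative only; not the Thermometer technique).

import torch

def fit_temperature(logits, labels, steps=200, lr=0.01):
    """Fit a single scalar temperature by minimizing NLL on held-out data."""
    log_t = torch.zeros(1, requires_grad=True)  # optimize log(T) so T stays positive
    opt = torch.optim.Adam([log_t], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.nn.functional.cross_entropy(logits / log_t.exp(), labels)
        loss.backward()
        opt.step()
    return log_t.exp().item()

# Toy usage: 256 held-out examples over a 10-way choice, with logits that are
# confidently wrong (labels are unrelated to the logits), so calibration
# should learn a temperature above 1 to soften the predictions.
logits = torch.randn(256, 10) * 3.0
labels = torch.randint(0, 10, (256,))
T = fit_temperature(logits, labels)
calibrated_probs = torch.softmax(logits / T, dim=-1)
print(f"fitted temperature: {T:.2f}")
```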
To prepare for extreme heat waves around the world, particularly in places known for cool summers, climate-simulation models that include a new computing concept may save tens of thousands of ...
Researchers at the Department of Energy’s Oak Ridge National Laboratory (ORNL) have developed a dynamic modeling method that uses machine learning to provide accurate simulations of grid behavior and ...