Researchers from the University of Maryland, Lawrence Livermore, Columbia and TogetherAI have developed a training technique that triples LLM inference speed without auxiliary models or infrastructure ...
This repository archives artifacts (prompts, configs, logs, and scripts) from a series of preprints (more info at https://slashreboot.com) on prompt-induced simulated metacognition and embodiment in ...