Abstract: We introduce LRP-QViT, an explainability-driven approach for mixed-precision bit allocation in Vision Transformers (ViTs). Our method assigns different bit widths to layers based on their ...
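The core allocation idea — give more precision to layers that matter more — can be sketched roughly as follows. This is a minimal illustration under assumptions, not the paper's actual method: the relevance scores are assumed to come from an explainability pass (e.g. Layer-wise Relevance Propagation), and the rank-based grouping rule here is hypothetical.

```python
def allocate_bits(relevance, choices=(4, 6, 8)):
    """Assign a bit width to each layer from its relevance score.

    relevance: one importance score per layer (assumed to be produced
    by an explainability method such as LRP); choices: available bit
    widths, lowest to highest. This greedy rank-based split is a
    hypothetical sketch, not the LRP-QViT algorithm itself.
    """
    n = len(relevance)
    # Rank layers by ascending relevance, then split the ranking into
    # equal groups: the least relevant group gets the fewest bits, the
    # most relevant group the most.
    order = sorted(range(n), key=lambda i: relevance[i])
    levels = sorted(choices)
    bits = [0] * n
    for rank, layer in enumerate(order):
        group = rank * len(levels) // n  # 0 .. len(levels) - 1
        bits[layer] = levels[group]
    return bits
```

For example, with per-layer relevance `[0.9, 0.1, 0.5]` the most relevant layer (index 0) receives 8 bits, the least relevant (index 1) receives 4, and the middle one 6.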