Abstract: We introduce LRP-QViT, an explainability-driven approach for mixed-precision bit allocation in Vision Transformers (ViTs). Our method assigns different bit widths to layers based on their ...
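Since the abstract is truncated, the sketch below only illustrates the general idea of relevance-driven mixed-precision allocation: per-layer importance scores are mapped to bit-widths under an average-bit budget. The function name, candidate bit-widths, and the simple ranking rule are illustrative assumptions, not the paper's actual algorithm.

```python
from typing import Dict, Sequence

def allocate_bits(relevance: Dict[str, float],
                  candidate_bits: Sequence[int] = (4, 8),
                  target_avg_bits: float = 6.0) -> Dict[str, int]:
    """Assign higher precision to layers with larger relevance scores.

    Layers are ranked by relevance; the most relevant layers receive the
    highest candidate bit-width until the average-bit budget is met, and
    the remaining layers fall back to the lowest candidate bit-width.
    """
    low, high = min(candidate_bits), max(candidate_bits)
    n = len(relevance)
    # Number of layers that can be kept at high precision without
    # exceeding the average-bit budget (assumes two candidate widths).
    n_high = int(n * (target_avg_bits - low) / (high - low)) if high > low else n
    ranked = sorted(relevance, key=relevance.get, reverse=True)
    return {name: (high if i < n_high else low) for i, name in enumerate(ranked)}

if __name__ == "__main__":
    # Toy relevance scores for four transformer blocks (made-up values).
    scores = {"block0": 0.31, "block1": 0.12, "block2": 0.45, "block3": 0.08}
    print(allocate_bits(scores))  # -> block2 and block0 at 8 bits, the rest at 4 bits
```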