Excited to share that we will be presenting our work in person at #NeurIPS2022 !
Interested in leveraging explainability to improve accuracy and robustness? Come check out our poster and chat 🥳
Code: github.com/hila-chefer/Ro
demo: huggingface.co/spaces/Hila/Ro
Quote Tweet
[1/n] Can explainability improve model accuracy? Our latest work shows the answer is yes!
arxiv.org/pdf/2206.01161
github.com/hila-chefer/Ro
We noticed that ViTs suffer from a salient issue: their predictions are often based on supporting signals (e.g., the background) rather than the actual object