Improving Interpretation Faithfulness for Transformers
Di Wang, Assistant Professor, Computer Science
Nov 20, 11:30 - 12:30
B9 L2 H2 H2
transformers
nlp
interpretation faithfulness
The attention mechanism has become a standard fixture in most state-of-the-art NLP, vision, and GNN models, not only because of the outstanding performance it delivers, but also because it offers a plausible, built-in explanation for the behavior of neural architectures, which is otherwise notoriously difficult to analyze. However, recent studies show that attention is unstable to randomness and perturbations during training or testing, such as different random seeds or slight perturbations of the input or embedding vectors, which prevents it from serving as a faithful explanation tool. A natural question is therefore whether we can find a substitute for the current attention mechanism that is more stable while preserving its most important characteristics for both explanation and prediction.
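As a rough illustration of the instability described above (not taken from the talk), the following minimal sketch computes single-head scaled dot-product attention weights on toy embeddings and measures how much the attention distribution shifts under a small input perturbation. All dimensions, values, and names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def attention_weights(X, Wq, Wk):
    """Softmax attention weights for a sequence of embeddings X (n x d)."""
    Q, K = X @ Wq, X @ Wk
    scores = Q @ K.T / np.sqrt(K.shape[1])
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    P = np.exp(scores)
    return P / P.sum(axis=1, keepdims=True)

n, d = 6, 16                                      # toy sequence length / embedding dim
X  = rng.normal(size=(n, d))                      # token embeddings
Wq = rng.normal(size=(d, d)) / np.sqrt(d)         # query projection
Wk = rng.normal(size=(d, d)) / np.sqrt(d)         # key projection

A_clean = attention_weights(X, Wq, Wk)
A_noisy = attention_weights(X + 0.01 * rng.normal(size=X.shape), Wq, Wk)

# How much the "explanation" (the attention distribution) moved per query token.
print("max per-row L1 change in attention:", np.abs(A_clean - A_noisy).sum(axis=1).max())
```

A faithful explanation tool would be expected to keep this change small relative to the size of the perturbation; the talk addresses how to design attention substitutes with that kind of stability.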