Show Me What You’re Looking For
Visualizing Abstracted Transformer Attention for Enhancing Their Local Interpretability on Time Series Data
Keywords: Transformers, Multi-Head Attention, Attention, Interpretability, Abstraction, Visualisation, Time Series
Abstract
While Transformers have demonstrated clear advantages in terms of learning performance, their lack of explainability and interpretability remains a major problem. This applies in particular to the processing of time series, as a specific form of complex data. In this paper, we propose an approach for visualizing abstracted attention information in order to enable computational sensemaking and local interpretability of the respective Transformer model. Our results demonstrate the efficacy of the proposed abstraction method and visualization, utilizing both synthetic and real-world data for evaluation.