Learn how to visualise saliency maps and more

ALPS 2021 tutorial 'Explainability for NLP'

Slides and lab exercises available on GitHub


We gave a tutorial on ‘Explainability for NLP’ at the first ALPS (Advanced Language Processing) winter school (http://lig-alps.imag.fr/index.php/schedule/). The tutorial introduces the concepts of ‘model understanding’ and ‘decision understanding’, and provides examples of approaches from the areas of fact checking and text classification.

Exercises for both model understanding and decision understanding are available here: https://github.com/copenlu/ALPS_2021
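To give a flavour of the kind of decision understanding technique covered, below is a minimal sketch of vanilla gradient-based saliency for a toy PyTorch text classifier. The model, vocabulary, and example sentence are hypothetical placeholders, not the tutorial's actual code; see the lab exercises for the full, worked versions.

```python
# Minimal sketch of vanilla gradient saliency for a toy text classifier.
# The model is untrained and purely illustrative; only the mechanics of
# backpropagating a class score to the input embeddings are shown.
import torch
import torch.nn as nn

torch.manual_seed(0)

vocab = {"<pad>": 0, "the": 1, "movie": 2, "was": 3, "great": 4, "boring": 5}

class ToyClassifier(nn.Module):
    def __init__(self, vocab_size, emb_dim=16, num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.classifier = nn.Linear(emb_dim, num_classes)

    def forward(self, token_ids):
        embeds = self.embedding(token_ids)            # (seq_len, emb_dim)
        embeds.retain_grad()                          # keep gradients w.r.t. embeddings
        logits = self.classifier(embeds.mean(dim=0))  # (num_classes,)
        return logits, embeds

model = ToyClassifier(len(vocab))
tokens = ["the", "movie", "was", "great"]
token_ids = torch.tensor([vocab[t] for t in tokens])

logits, embeds = model(token_ids)
predicted_class = logits.argmax()

# Backpropagate the predicted class score to the input embeddings.
logits[predicted_class].backward()

# Saliency per token: L2 norm of the gradient of its embedding,
# normalised so the scores sum to one for easier reading.
saliency = embeds.grad.norm(dim=-1)
saliency = saliency / saliency.sum()

for token, score in zip(tokens, saliency.tolist()):
    print(f"{token:>8s}  {score:.3f}")
```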


CopeNLU is a Natural Language Processing research group led by Isabelle Augenstein with a focus on researching methods for tasks that require a deep understanding of language, as opposed to shallow pattern recognition. We are affiliated with the Natural Language Processing Section, as well as with the Pioneer Centre for AI, at the Department of Computer Science, University of Copenhagen.