Data source: CROSBI

Iterative Recursive Attention Model for Interpretable Sequence Classification (CROSBI ID 674276)

Conference paper in proceedings | original scientific paper | international peer review

Tutek, Martin ; Šnajder, Jan. Iterative Recursive Attention Model for Interpretable Sequence Classification // Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP / Linzen, Tal ; Chrupała, Grzegorz ; Alishahi, Afra (eds.). Brussels: Association for Computational Linguistics (ACL), 2018, pp. 249-257

Responsibility data

Tutek, Martin ; Šnajder, Jan

English

Iterative Recursive Attention Model for Interpretable Sequence Classification

Natural language processing has greatly benefited from the introduction of the attention mechanism. However, standard attention models are of limited interpretability for tasks that involve a series of inference steps. We describe an iterative recursive attention model, which constructs incremental representations of input data by reusing the results of previously computed queries. We train our model on sentiment classification datasets and demonstrate its capacity to identify and combine different aspects of the input in an easily interpretable manner, while obtaining performance close to the state of the art.
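The abstract's core idea, reusing the results of previously computed attention queries to build the next query, can be illustrated with a minimal sketch. This is not the paper's exact model: the query-update recurrence, the weight matrix `W_q`, and the fixed step count are illustrative assumptions.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def iterative_attention(H, W_q, steps=3):
    """Sketch of iterative attention over token representations.

    H   : (seq_len, d) matrix of token representations.
    W_q : (d, 2*d) query-update matrix (illustrative assumption).

    At each step, attention is computed with the current query, and the
    attended summary is fed back to form the next query, so later steps
    condition on earlier results. The per-step weights `a` are what make
    the model's inference steps inspectable.
    """
    d = H.shape[1]
    q = np.zeros(d)                     # initial (empty) query
    for _ in range(steps):
        scores = H @ q                  # dot-product attention scores
        a = softmax(scores)             # interpretable per-step weights
        r = a @ H                       # attended summary of the input
        q = np.tanh(W_q @ np.concatenate([q, r]))  # reuse previous result
    return q, a
```

With a zero initial query the first step attends uniformly; subsequent steps sharpen as the query accumulates information from earlier passes.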

Natural language processing ; Deep learning ; interpretability


Contribution data

249-257.

2018.

published

Parent publication data

Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP

Linzen, Tal ; Chrupała, Grzegorz ; Alishahi, Afra

Brussels: Association for Computational Linguistics (ACL)

Conference data

EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP

poster

01.11.2018-01.11.2018

Brussels, Belgium

Related fields

Computer science

Links