Augmenting Neural Metaphor Detection with Concreteness

Ghadi Alnafesah, Harish Tayyar Madabushi, Mark Lee

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

The idea that a shift in concreteness within a sentence indicates the presence of a metaphor has been around for a while. However, recent methods of metaphor detection that rely on deep neural models have ignored concreteness and related psycholinguistic information. We hypothesise that this information is not available to these models and that adding it will boost their performance in detecting metaphor. We test this hypothesis on the Metaphor Detection Shared Task 2020 and find that the addition of concreteness information does in fact boost deep neural models. We also run tests on data from a previous shared task and show similar results.
Original language: English
Title of host publication: Proceedings of the Second Workshop on Figurative Language Processing
Editors: Beata Beigman Klebanov, Ekaterina Shutova, Patricia Lichtenstein, Smaranda Muresan, Chee Wee, Anna Feldman, Debanjan Ghosh
Publisher: Association for Computational Linguistics, ACL
Pages: 204-210
ISBN (Print): 9781952148125
Publication status: Published - 9 Jul 2020
Event: Second Workshop on Figurative Language Processing (FigLang2020) - Virtual event
Duration: 9 Jul 2020 - 9 Jul 2020

Workshop

Workshop: Second Workshop on Figurative Language Processing (FigLang2020)
City: Virtual event
Period: 9/07/20 - 9/07/20
