Abstract
This paper describes our submission to SemEval 2021 Task 2. We compare XLM-RoBERTa Base and Large in the few-shot and zero-shot settings, and additionally test the effectiveness of a k-nearest neighbors classifier in the few-shot setting in place of the more traditional multi-layer perceptron. Our experiments on both the multilingual and cross-lingual data show that XLM-RoBERTa Large, unlike the Base version, appears to transfer learning more effectively in the few-shot setting, and that the k-nearest neighbors classifier is indeed a more powerful classifier than a multi-layer perceptron when used in few-shot learning.
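The core idea of the few-shot classifier comparison can be illustrated with a minimal sketch: label each query example by majority vote among its k nearest labeled support examples in embedding space. This is not the paper's implementation; the 2-D vectors, cluster means, and function name below are illustrative stand-ins for the XLM-RoBERTa sentence-pair embeddings used in the actual system.

```python
import numpy as np

def knn_predict(support_emb, support_labels, query_emb, k=3):
    """Assign each query the majority label among its k most cosine-similar support embeddings."""
    # Normalise rows so that a dot product equals cosine similarity.
    s = support_emb / np.linalg.norm(support_emb, axis=1, keepdims=True)
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    sims = q @ s.T                                 # (n_query, n_support) cosine similarities
    nearest = np.argsort(-sims, axis=1)[:, :k]     # indices of the k most similar supports
    preds = []
    for row in nearest:
        votes = support_labels[row]
        preds.append(np.bincount(votes).argmax())  # majority vote over neighbor labels
    return np.array(preds)

# Toy few-shot setup: two well-separated clusters standing in for
# "same meaning" (label 1) and "different meaning" (label 0) pairs.
rng = np.random.default_rng(0)
same = rng.normal(loc=[1.0, 1.0], scale=0.1, size=(8, 2))
diff = rng.normal(loc=[-1.0, 1.0], scale=0.1, size=(8, 2))
support = np.vstack([same, diff])
labels = np.array([1] * 8 + [0] * 8)

queries = np.array([[0.9, 1.1], [-1.1, 0.9]])
print(knn_predict(support, labels, queries, k=3))  # → [1 0]
```

Unlike a multi-layer perceptron, this classifier has no weights to fit, which is one plausible reason it holds up better when only a handful of labeled examples are available.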
Original language | English
Title of host publication | Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021)
Editors | Alexis Palmer, Nathan Schneider, Natalie Schluter, Guy Emerson, Aurelie Herbelot, Xiaodan Zhu
Publisher | Association for Computational Linguistics, ACL
Pages | 738-742
Number of pages | 5
ISBN (Electronic) | 9781954085701
ISBN (Print) | 9781954085701
DOIs |
Publication status | Published - Aug 2021
Event | 15th International Workshop on Semantic Evaluation, SemEval 2021 - Virtual, Bangkok, Thailand. Duration: 5 Aug 2021 → 6 Aug 2021
Publication series
Name | Lexical and Computational Semantics and Semantic Evaluation (formerly Workshop on Sense Evaluation) (SemEval)
Conference
Conference | 15th International Workshop on Semantic Evaluation, SemEval 2021
Country/Territory | Thailand
City | Virtual, Bangkok
Period | 5 Aug 2021 → 6 Aug 2021
Bibliographical note
Publisher Copyright: © 2021 Association for Computational Linguistics.
ASJC Scopus subject areas
- Computational Theory and Mathematics
- Computer Science Applications
- Theoretical Computer Science