| Reference | Model | Dataset(s) | F1 Score | Limitation / Novelty |
|---|---|---|---|---|
| Xu Guo et al. [4] | BERT with latent-optimization method | iSarcasm, Ptacek | iSarcasm: 34.92%; SemEval-18: 66.26% | Limitation: no data augmentation was performed to balance the iSarcasm dataset. |
| Amirhossein et al. [16] | BERT-based; data augmentation by word removal | iSarcasm | iSarcasm: 41.4% | Limitation: data augmentation uses only word removal, which reduces sentence quality and may remove the sarcastic nature of a sentence. |
| Our proposed method | BERT, RoBERTa, DistilBERT | iSarcasm, Ghosh, Ptacek, SemEval-18 | iSarcasm: 40.44%; Ghosh: 81.08%; Ptacek: 87.41%; SemEval-18: 67.46% | Novelty: data augmentation using GloVe word embeddings, a data-duplication technique, and deeper analysis of the augmentation results. |
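The GloVe-based augmentation named in the last row can be illustrated with a minimal sketch: each in-vocabulary word is replaced by its nearest neighbor in the embedding space (measured by cosine similarity), producing a semantically similar variant of the sentence. The toy embedding table and the `augment`/`nearest_neighbor` helpers below are illustrative assumptions, not the authors' implementation; in practice the vectors would be loaded from a pretrained GloVe file such as `glove.6B.100d.txt`.

```python
import numpy as np

# Toy embedding table standing in for pretrained GloVe vectors
# (in practice these would be loaded from a file like glove.6B.100d.txt).
EMBEDDINGS = {
    "good":  np.array([0.90, 0.10, 0.00]),
    "great": np.array([0.85, 0.15, 0.05]),
    "bad":   np.array([-0.80, 0.20, 0.10]),
    "movie": np.array([0.10, 0.90, 0.30]),
    "film":  np.array([0.12, 0.88, 0.28]),
}

def nearest_neighbor(word):
    """Return the most cosine-similar *other* word, or None if OOV."""
    if word not in EMBEDDINGS:
        return None
    v = EMBEDDINGS[word]
    best, best_sim = None, -1.0
    for other, u in EMBEDDINGS.items():
        if other == word:
            continue
        sim = float(np.dot(v, u) / (np.linalg.norm(v) * np.linalg.norm(u)))
        if sim > best_sim:
            best, best_sim = other, sim
    return best

def augment(sentence, replace_prob=1.0, seed=0):
    """Replace in-vocabulary words with their nearest embedding neighbor."""
    rng = np.random.default_rng(seed)
    out = []
    for tok in sentence.split():
        if tok in EMBEDDINGS and rng.random() < replace_prob:
            out.append(nearest_neighbor(tok))
        else:
            out.append(tok)
    return " ".join(out)

print(augment("good movie"))  # -> "great film"
```

Because the replacement word stays close to the original in embedding space, this tends to preserve sentence quality better than word removal, which is the limitation noted for [16] above.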