A novel Auto-ML Framework for Sarcasm Detection

Prof Doc Thesis


Imtiaz, S. 2022. A novel Auto-ML Framework for Sarcasm Detection. Prof Doc Thesis University of East London School of Architecture, Computing and Engineering https://doi.org/10.15123/uel.8q7y9
Authors: Imtiaz, S.
Type: Prof Doc Thesis
Abstract

Many domains present sarcasm or verbal irony in the text of reviews, tweets, comments, and dialog discussions. The purpose of this research is to classify sarcasm across multiple domains using a deep-learning-based AutoML framework. The proposed AutoML framework has five models in its model search pipeline; these models are combinations of a convolutional neural network (CNN), Long Short-Term Memory (LSTM), deep neural network (DNN), and Bidirectional Long Short-Term Memory (BiLSTM). The hybrid combinations of the CNN, LSTM, and DNN models are presented as CNN-LSTM-DNN, LSTM-DNN, BiLSTM-DNN, and CNN-BiLSTM-DNN. This work proposes algorithms that contrast polarities between terms and phrases, categorized into implicit and explicit incongruity. The incongruity and pragmatic features, such as punctuation and exclamation marks, are integrated into the AutoML DeepConcat framework models. This integration takes place when the DeepConcat AutoML framework initiates a model search pipeline over the five models to achieve better performance. Conceptually, DeepConcat means that each model is concatenated with the generalized features. The pretrained BiLSTM model achieved the best performance of 0.98 F1 when compared with the other model performances. Similarly, the AutoML-based BiLSTM-DNN model achieved the best performance of 0.98 F1, which is better than core approaches and the existing state of the art on the Twitter tweet dataset, Amazon reviews, and dialog discussion comments. The proposed AutoML framework compared the F1 and AUC performance metrics and found F1 to be the better metric. The integration of all feature categories achieved better performance than the individual categories of pragmatic and incongruity features.
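The pragmatic cues named above (punctuation, exclamation marks, and similar surface signals) can be sketched as a simple feature extractor. This is a hypothetical illustration of that feature category, not the thesis implementation; the function name and the exact cue set are assumptions.

```python
import string


def pragmatic_features(text):
    """Count simple pragmatic cues often used as sarcasm signals.

    Hypothetical sketch of the kind of pragmatic features the abstract
    mentions (punctuation, exclamation marks, and others); not the
    actual DeepConcat feature code.
    """
    return {
        # Repeated exclamation marks are a common sarcasm marker.
        "exclamations": text.count("!"),
        "questions": text.count("?"),
        # Total punctuation characters in the text.
        "punctuation": sum(ch in string.punctuation for ch in text),
        # Fully capitalized words (length > 1) suggest emphasis.
        "all_caps_words": sum(w.isupper() and len(w) > 1 for w in text.split()),
    }


feats = pragmatic_features("Oh GREAT, another Monday!!!")
```

In a DeepConcat-style model, a vector of such counts would be concatenated with the learned text representation before the final dense layers.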
This research also evaluated tuning the dropout-layer hyperparameter with AutoML-based Bayesian optimization, which achieved better performance than a fixed dropout percentage such as 10%. The proposed DeepConcat AutoML framework evaluated the best pretrained models, BiLSTM-DNN and CNN-CNN-DNN, for transferring knowledge across domains such as Amazon reviews and dialog discussion comments (text) using the last-layer, full-layer, and our fade-out freezing strategies. In transfer learning, the fade-out strategy outperformed the existing state-of-the-art BiLSTM-DNN model, achieving 0.98 F1 on tweets, 0.85 F1 on Amazon reviews, and 0.87 F1 on the dialog discussion SCV2-Gen dataset. Further, all strategies can be compared across domains to select the best model.
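One plausible reading of the fade-out freezing strategy contrasted above with last-layer and full-layer freezing is a schedule that starts with all pretrained layers frozen and progressively unfreezes them from the output side as training proceeds. The sketch below encodes that interpretation as a trainable-layer mask; the function name and the linear unfreezing schedule are assumptions, not the thesis implementation.

```python
def fadeout_trainable_mask(n_layers, epoch, total_epochs):
    """Sketch of a fade-out freezing schedule for transfer learning.

    An interpretation of the abstract's fade-out strategy (an
    assumption, not the thesis code): begin with every pretrained
    layer frozen, then gradually unfreeze layers from the top
    (output side) toward the bottom as training progresses.
    Returns a list of booleans, True = trainable; index 0 is the
    layer closest to the input.
    """
    # Fraction of training completed, clamped to [0, 1].
    progress = min(1.0, (epoch + 1) / total_epochs)
    # Number of layers unfrozen so far, counted from the top.
    n_unfrozen = round(progress * n_layers)
    return [i >= n_layers - n_unfrozen for i in range(n_layers)]
```

With this mask, last-layer freezing corresponds to keeping only the final entry True throughout, and full-layer fine-tuning to setting every entry True from the first epoch, which makes the three strategies directly comparable under one interface.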

Year: 2022
Publisher: University of East London
Digital Object Identifier (DOI): https://doi.org/10.15123/uel.8q7y9
Publication dates
Online: 12 Apr 2022
Publication process dates
Submitted: 01 Apr 2022
Deposited: 12 Apr 2022
Permalink: https://repository.uel.ac.uk/item/8q7y9

Download files

File: 2022_DProf_Imtiaz.pdf
License: CC BY-NC-ND 4.0
File access level: Anyone

