Selected article for: "logistic regression and long short term"

Author: Abdelminaam, D. S.; Neggaz, N.; Gomaa, I. A.; Ismail, F. H.; Elsawy, A. A.
Title: ArabicDialects: An Efficient Framework for Arabic Dialects Opinion Mining on Twitter Using Optimized Deep Neural Networks
  • Cord-id: 8r7azvc2
  • Document date: 2021_1_1
  • ID: 8r7azvc2
    Snippet: The rapid development of communication tools such as social networks, Twitter, and WhatsApp has generated a large mass of important textual data. The COVID-19 pandemic has also inflamed social networks, so the automatic analysis of opinions has become paramount. The purpose of this paper is to analyze Arabic tweets in terms of positivity, negativity, or neutrality. Analyzing opinions in the Arabic language poses a real challenge: the use of different dialect
    Document: The rapid development of communication tools such as social networks, Twitter, and WhatsApp has generated a large mass of important textual data. The COVID-19 pandemic has also inflamed social networks, so the automatic analysis of opinions has become paramount. The purpose of this paper is to analyze Arabic tweets in terms of positivity, negativity, or neutrality. Analyzing opinions in the Arabic language poses a real challenge: the use of different dialects (Egyptian, Saudi, Maghrebi, Gulf, Levantine, Syrian, etc.). In this paper, we introduce two major components. The first employs six machine learning (ML) methods, including Decision Trees (DT), Logistic Regression (LR), k-Nearest Neighbors (k-NN), Random Forests (RF), Support Vector Machines (SVM), and Naïve Bayes (NB), with TF-IDF for feature extraction. The second part tests three variants of Deep Learning (DL) based on multiplicative Long Short-Term Memory (mLSTM), Long Short-Term Memory (LSTM), and Gated Recurrent Units (GRU), using word embeddings as the input vectors. The experimental study was validated on three Arabic language corpora (TEAD, ATSAD, and ASTD) and two learning modes (hold-out and 10-fold cross-validation). The results in terms of Accuracy (ACC), Precision (PREC), Recall (REC), and F1-score (F1) show a clear performance advantage for the DL techniques under the 10-fold strategy compared to the state of the art. The experiments in the paper reveal that the proposed DL models achieved the best results.
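    The first component the abstract describes, TF-IDF feature extraction feeding classifiers such as Logistic Regression, can be illustrated with a minimal sketch of the TF-IDF weighting step. The toy token lists below are hypothetical English stand-ins (the paper's data are Arabic tweets from TEAD, ATSAD, and ASTD), and a real experiment would more likely use a library implementation such as scikit-learn's TfidfVectorizer:

    ```python
    import math
    from collections import Counter

    def tfidf(corpus):
        """Compute TF-IDF weight vectors for a list of tokenized documents.

        tf  = term count / document length
        idf = log(number of documents / document frequency of the term)
        """
        n = len(corpus)
        # Document frequency: in how many documents each term appears.
        df = Counter(term for doc in corpus for term in set(doc))
        vectors = []
        for doc in corpus:
            tf = Counter(doc)
            vectors.append({
                term: (count / len(doc)) * math.log(n / df[term])
                for term, count in tf.items()
            })
        return vectors

    # Hypothetical toy corpus; each inner list is one tokenized "tweet".
    docs = [["good", "service"], ["bad", "service"], ["good", "good", "food"]]
    vecs = tfidf(docs)
    ```

    The resulting sparse weight dictionaries play the role of the feature vectors that the six ML classifiers (DT, LR, k-NN, RF, SVM, NB) consume; terms appearing in every document get weight zero, while rarer terms are weighted up.
    
    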

    Search related documents:
    Co-phrase search for related documents
    • acc accuracy and long lstm short term memory: 1
    • acc accuracy and lr logistic regression: 1
    • acc accuracy and lstm short term memory: 1
    • acc accuracy and machine learning: 1, 2, 3, 4, 5
    • logistic regression and long lstm short term memory: 1, 2, 3, 4, 5, 6, 7, 8, 9
    • logistic regression and lr logistic regression: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25
    • logistic regression and lr logistic regression dt decision trees: 1
    • logistic regression and lstm short term memory: 1, 2, 3, 4, 5, 6, 7, 8, 9
    • logistic regression and machine learning: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25
    • long lstm short term memory and lr logistic regression: 1, 2, 3, 4, 5
    • long lstm short term memory and lstm short term memory: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25
    • long lstm short term memory and machine learning: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25
    • long mlstm short term memory and lstm short term memory: 1
    • long mlstm short term memory and machine learning: 1
    • lr logistic regression and lstm short term memory: 1, 2, 3, 4, 5
    • lr logistic regression and machine learning: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25
    • lr logistic regression dt decision trees and machine learning: 1