Author: Kumar, Akshi; Bhatia, M. P. S.; Sangwan, Saurabh Raj
Title: Rumour detection using deep learning and filter-wrapper feature selection in benchmark twitter dataset
Cord-id: zy6ms2sn
Document date: 2021_8_20
ID: zy6ms2sn
Snippet: Microblogs have become a customary news media source in recent times. But as synthetic text or ‘readfakes’ scale up online disinformation operations, unsubstantiated pieces of information on social media platforms can cause significant havoc by misleading people. It is essential to develop models that can detect rumours and curtail their cascading effects and virality. Undeniably, quick rumour detection during the initial propagation phase is desirable for subsequent veracity and stance assessment.
Document: Microblogs have become a customary news media source in recent times. But as synthetic text or ‘readfakes’ scale up online disinformation operations, unsubstantiated pieces of information on social media platforms can cause significant havoc by misleading people. It is essential to develop models that can detect rumours and curtail their cascading effects and virality. Undeniably, quick rumour detection during the initial propagation phase is desirable for subsequent veracity and stance assessment. Linguistic features are easily available and act as important attributes during this phase. At the same time, the choice of features is crucial for both the interpretability and the performance of the classifier. Motivated by the need for automatic rumour detection, this research proffers a hybrid model for rumour classification using deep learning (a convolutional neural network) and a filter-wrapper (Information Gain-Ant Colony, IG-ACO) optimized Naive Bayes classifier, trained and tested on the PHEME rumour dataset. Textual features are learnt by the CNN and combined with the optimized feature vector generated by the IG-ACO filter-wrapper technique. The resultant optimized vector is then used to train the Naive Bayes classifier for rumour classification at the output layer of the CNN. The proposed classifier shows improved performance compared to existing works.
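For illustration only, the sketch below approximates the pipeline described in the abstract under several stated assumptions: synthetic arrays stand in for the PHEME tweets and handcrafted linguistic features, scikit-learn's mutual information is used as a proxy for the information-gain filter, and a heavily simplified ant-colony loop (pheromone-weighted random subsets scored by Naive Bayes cross-validation) stands in for the full ACO wrapper. Layer names, subset sizes and hyper-parameters are illustrative placeholders, not the authors' implementation.

```python
# Hedged sketch: CNN text features + IG filter + simplified ACO wrapper + Naive Bayes.
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.naive_bayes import GaussianNB
from tensorflow.keras import Model
from tensorflow.keras.layers import Conv1D, Dense, Embedding, GlobalMaxPooling1D, Input

rng = np.random.default_rng(42)

# Placeholder data standing in for tokenised tweets and handcrafted linguistic features.
n_samples, seq_len, vocab = 400, 30, 1000
X_tokens = rng.integers(1, vocab, size=(n_samples, seq_len))   # padded token ids
X_linguistic = rng.random((n_samples, 20))                     # e.g. punctuation / POS ratios
y = rng.integers(0, 2, size=n_samples)                         # rumour vs. non-rumour labels

# 1. CNN learns textual representations; the penultimate layer is used as a feature vector.
inp = Input(shape=(seq_len,))
emb = Embedding(vocab, 50)(inp)
conv = Conv1D(64, 3, activation="relu")(emb)
cnn_feats = GlobalMaxPooling1D(name="cnn_feats")(conv)
out = Dense(2, activation="softmax")(cnn_feats)
cnn = Model(inp, out)
cnn.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
cnn.fit(X_tokens, y, epochs=2, batch_size=32, verbose=0)
extractor = Model(cnn.input, cnn.get_layer("cnn_feats").output)
cnn_vectors = extractor.predict(X_tokens, verbose=0)

# 2. Filter step: keep the linguistic features with the highest information gain
#    (mutual information used here as a proxy).
ig = mutual_info_classif(X_linguistic, y, random_state=0)
filtered_idx = np.argsort(ig)[-10:]
X_filtered = X_linguistic[:, filtered_idx]

# 3. Wrapper step: simplified ant-colony search, each candidate subset scored
#    by Naive Bayes cross-validation accuracy.
def aco_wrapper(X, y, n_ants=8, n_iters=5, subset_size=6, rho=0.3):
    pheromone = np.ones(X.shape[1])
    best_subset, best_score = np.arange(X.shape[1]), -np.inf
    for _ in range(n_iters):
        for _ in range(n_ants):
            probs = pheromone / pheromone.sum()
            subset = rng.choice(X.shape[1], size=subset_size, replace=False, p=probs)
            score = cross_val_score(GaussianNB(), X[:, subset], y, cv=3).mean()
            if score > best_score:
                best_subset, best_score = subset, score
            pheromone[subset] += score          # reinforce features used in good subsets
        pheromone *= (1.0 - rho)                # pheromone evaporation
    return np.sort(best_subset)

selected = aco_wrapper(X_filtered, y)

# 4. Fuse CNN and IG-ACO feature vectors and classify with Naive Bayes.
X_fused = np.hstack([cnn_vectors, X_filtered[:, selected]])
X_tr, X_te, y_tr, y_te = train_test_split(X_fused, y, test_size=0.25, random_state=0)
nb = GaussianNB().fit(X_tr, y_tr)
print("held-out accuracy:", nb.score(X_te, y_te))
```

On real data, the synthetic arrays would be replaced by tokenised PHEME tweets and genuine linguistic features, and the toy wrapper loop by a full ACO search; the structure of the pipeline (CNN features fused with a filter-wrapper selected vector, classified by Naive Bayes) is what the sketch is meant to convey.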