Author: Meng, Changping; Chen, Muhao; Mao, Jie; Neville, Jennifer
Title: ReadNet: A Hierarchical Transformer Framework for Web Article Readability Analysis
ID: fymgnv1g
Document date: 2020-03-17
Document: Analyzing the readability of articles has been an important sociolinguistic task. Addressing this task is necessary for the automatic recommendation of appropriate articles to readers with different comprehension abilities, and it further benefits education systems, web information systems, and digital libraries. Current methods for assessing readability employ empirical measures or statistical learning techniques that are limited in their ability to characterize complex patterns such as article structures and the semantic meanings of sentences. In this paper, we propose a new and comprehensive framework which uses a hierarchical self-attention model to analyze document readability. In this model, measurements of sentence-level difficulty are captured along with the semantic meaning of each sentence. Additionally, the sentence-level features are incorporated to characterize the overall readability of an article with consideration of article structure. We evaluate our proposed approach on three widely-used benchmark datasets against several strong baseline approaches. Experimental results show that our proposed method achieves state-of-the-art performance in estimating the readability of various web articles and literature.
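The abstract describes a two-level architecture: a word-level self-attention encoder that produces sentence embeddings, followed by a sentence-level self-attention encoder that aggregates them into a document representation for readability prediction. The sketch below illustrates this hierarchical idea in PyTorch; it is a minimal illustration, not the authors' released implementation. The module names, dimensions, mean-pooling aggregation, and classification head are all assumptions (the paper also incorporates explicit difficulty features, omitted here), and positional encodings are left out for brevity.

```python
# Minimal sketch of a hierarchical transformer for document readability.
# Assumptions: mean pooling at both levels, a simple linear readability head,
# and no positional encodings or hand-crafted difficulty features.
import torch
import torch.nn as nn

class HierarchicalReadabilityModel(nn.Module):
    def __init__(self, vocab_size, d_model=128, nhead=4, num_layers=2, num_classes=5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model, padding_idx=0)
        # Word-level encoder: self-attention over the tokens within each sentence.
        self.word_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead, batch_first=True),
            num_layers,
        )
        # Sentence-level encoder: self-attention over sentence embeddings,
        # intended to capture article structure.
        self.sent_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead, batch_first=True),
            num_layers,
        )
        self.classifier = nn.Linear(d_model, num_classes)

    def forward(self, token_ids):
        # token_ids: (batch, n_sentences, n_tokens) integer tensor.
        b, s, t = token_ids.shape
        x = self.embed(token_ids.view(b * s, t))              # (b*s, t, d_model)
        x = self.word_encoder(x).mean(dim=1)                  # pool tokens -> sentence vectors
        x = self.sent_encoder(x.view(b, s, -1)).mean(dim=1)   # pool sentences -> document vector
        return self.classifier(x)                             # readability-level logits

# Usage: score a toy batch of 2 documents, each with 3 sentences of 6 tokens.
model = HierarchicalReadabilityModel(vocab_size=1000)
logits = model(torch.randint(1, 1000, (2, 3, 6)))
print(logits.shape)  # torch.Size([2, 5])
```

The hierarchical factorization keeps attention cost manageable: attention is computed within sentences and then across sentence embeddings, rather than over every token pair in the document.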