Selected article for: "batch size and training process"

Author: Ferrando, Javier; Domínguez, Juan Luis; Torres, Jordi; García, Raúl; García, David; Garrido, Daniel; Cortada, Jordi; Valero, Mateo
Title: Improving Accuracy and Speeding Up Document Image Classification Through Parallel Systems
  • Cord-id: 3a6kdh4t
  • Document date: 2020-06-15
  • ID: 3a6kdh4t
    Document: This paper presents a study showing the benefits of the EfficientNet models compared with heavier Convolutional Neural Networks (CNNs) in the Document Classification task, an essential problem in the digitalization process of institutions. We show on the RVL-CDIP dataset that we can improve previous results with a much lighter model, and we present its transfer learning capabilities on a smaller in-domain dataset such as Tobacco3482. Moreover, we present an ensemble pipeline which is able to boost image-only input by combining image model predictions with those generated by a BERT model on text extracted via OCR. We also show that the batch size can be increased without hindering accuracy, so that training can be sped up by parallelizing across multiple GPUs, reducing the computational time needed. Lastly, we report the training performance differences between the PyTorch and TensorFlow Deep Learning frameworks.
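    The abstract's point about increasing batch size while parallelizing across GPUs is commonly paired with a linear learning-rate scaling rule. The helper below is a minimal illustrative sketch of that bookkeeping, not code from the paper; the function names and values are assumptions chosen for illustration.

    ```python
    def effective_batch_size(per_gpu_batch: int, num_gpus: int) -> int:
        """Total samples consumed per optimizer step in data-parallel training."""
        return per_gpu_batch * num_gpus


    def scaled_learning_rate(base_lr: float, base_batch: int, new_batch: int) -> float:
        """Linear scaling rule: grow the learning rate in proportion to batch size."""
        return base_lr * (new_batch / base_batch)


    if __name__ == "__main__":
        # e.g. moving a single-GPU recipe (batch 64, lr 0.01) to 4 GPUs
        batch = effective_batch_size(per_gpu_batch=64, num_gpus=4)
        lr = scaled_learning_rate(base_lr=0.01, base_batch=64, new_batch=batch)
        print(batch, lr)  # 256 0.04
    ```

    In practice this rule is usually combined with a warmup schedule, since very large batches can destabilize the first epochs of training.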

    Search related documents:
    Co phrase search for related documents
    • accuracy give and loss function: 1
    • activation function and adam optimizer: 1, 2, 3
    • activation function and loss function: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12
    • activation layer and loss function: 1, 2, 3, 4
    • adam optimizer and loss function: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10
    • adam optimizer and loss function value: 1