Effects of data count and image scaling on Deep Learning training

Daisuke Hirahara, Eichi Takaya, Taro Takahara, Takuya Ueda

Research output: Contribution to journal › Article › peer-review

11 Citations (Scopus)


Background. Deep learning using convolutional neural networks (CNNs) has achieved significant results in various image-based fields. Deep learning can automatically extract features from data, and CNNs extract image features through convolution processing. We hypothesized that enlarging images with interpolation methods would yield more effective feature extraction. To investigate how the effect of interpolation changes as the amount of data increases, we examined and compared the effectiveness of data augmentation by inversion or rotation with image augmentation by interpolation when the training image data were scarce. Further, we clarified whether image augmentation by interpolation is useful for CNN training. To examine the usefulness of interpolation methods for medical images, we used the Gender01 data set, a sex-classification data set of chest radiographs. To compare image enlargement by interpolation with data augmentation by inversion and rotation, we examined two- and four-fold enlargement using the Bilinear method.
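The bilinear enlargement the abstract refers to maps each output pixel back to a fractional position in the source image and blends the four surrounding pixels by their distances. As a minimal illustration (a pure-Python sketch, not the authors' implementation; the function name and the plain list-of-lists image representation are assumptions for clarity):

```python
def bilinear_resize(img, new_h, new_w):
    """Upscale a 2-D grayscale image (list of lists of floats)
    to (new_h, new_w) using bilinear interpolation."""
    h, w = len(img), len(img[0])
    out = [[0.0] * new_w for _ in range(new_h)]
    for i in range(new_h):
        # Map the output row back to a fractional source row.
        y = i * (h - 1) / (new_h - 1) if new_h > 1 else 0.0
        y0 = int(y)
        y1 = min(y0 + 1, h - 1)
        fy = y - y0
        for j in range(new_w):
            # Map the output column back to a fractional source column.
            x = j * (w - 1) / (new_w - 1) if new_w > 1 else 0.0
            x0 = int(x)
            x1 = min(x0 + 1, w - 1)
            fx = x - x0
            # Blend the four neighboring pixels by distance.
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bottom = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            out[i][j] = top * (1 - fy) + bottom * fy
    return out


# Two-fold-style enlargement of a tiny 2x2 image to 3x3:
small = [[0.0, 2.0], [4.0, 6.0]]
big = bilinear_resize(small, 3, 3)
# Corner pixels are preserved; the center is the average of all four.
```

In practice such resizing is done with a library call (e.g. Pillow's `Image.resize` with a resampling filter), and the other interpolation kernels the paper lists (Nearest, Bicubic, Hamming, Lanczos) differ only in how the neighboring pixels are weighted.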

Original language: English
Pages (from-to): 1-13
Number of pages: 13
Journal: PeerJ Computer Science
Publication status: Published - 2020


Keywords

  • Bicubic
  • Bilinear
  • Deep learning
  • Fashion-MNIST
  • Hamming
  • Image scaling
  • Interpolation
  • Lanczos
  • Medical image
  • Nearest


