
Motivation
Deep learning usually requires a huge amount of training data to achieve good performance.
However, training with such large datasets incurs a high computational cost in terms of both compute power and time.
This is especially tough for beginners.
In practice, it is also hard to collect a huge number of annotated training samples.
I think that 1,000 samples is the minimum number for training a network.
Training with 1,000 samples also brings technical challenges; one of them is improving generalization performance while avoiding overfitting.
Let's enjoy Train with 1000!
Regulations
Train with 1,000 samples.
The 1,000 training samples for MNIST, CIFAR-10, and CIFAR-100 can be obtained with the reference code. Please use those training samples.
Evaluate with the test samples of the original datasets.
Provide reproduction code for both training and evaluation; evaluation code alone is insufficient. Stochastic training cannot be perfectly reproduced, but the training code helps a lot in understanding the result.
Don't use the test data as a training stopping criterion or for model selection; hold out a validation split from the 1,000 training samples instead (see the sketch after this list).
Data augmentation is OK.
Data generation is OK, but the data generator must also be trained with only the 1,000 samples.
Transfer learning or using a pre-trained network is NG.
Ensembles are OK, but each network in the ensemble must be trained with the 1,000 samples.
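
For concreteness, here is a minimal PyTorch/torchvision sketch of a rule-compliant data setup on CIFAR-10. It is illustrative only: the official 1,000-sample subset comes from the reference code, so the simple index slicing below, as well as the names (train_1000), the 900/100 split, and the chosen augmentations, are assumptions made for this sketch.

# Minimal illustrative sketch (assumes PyTorch and torchvision are installed).
import torch
from torch.utils.data import DataLoader, Subset, random_split
from torchvision import datasets, transforms

# Data augmentation is allowed; random crop and flip are common choices.
train_tf = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])
test_tf = transforms.ToTensor()

full_train = datasets.CIFAR10("./data", train=True, download=True,
                              transform=train_tf)

# Illustrative stand-in for the official subset: in the challenge, use
# the 1,000 samples produced by the reference code instead of this slice.
train_1000 = Subset(full_train, range(1000))

# Stopping criteria and model selection must not touch the test set, so
# carve a validation split out of the 1,000 training samples.
g = torch.Generator().manual_seed(0)
train_set, val_set = random_split(train_1000, [900, 100], generator=g)

train_loader = DataLoader(train_set, batch_size=64, shuffle=True)
val_loader = DataLoader(val_set, batch_size=256)

# The original test set is used only for the final evaluation.
test_set = datasets.CIFAR10("./data", train=False, download=True,
                            transform=test_tf)
test_loader = DataLoader(test_set, batch_size=256)

In practice you may prefer the validation split to use the non-augmenting transform; it shares train_tf here only to keep the sketch short.
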
FAQ
Reference code
The reference code is here; it includes code for obtaining the 1,000 samples and a sample network training script for each dataset.
Submission
Please submit your results of Train with 1000!