
Is it possible in deep learning to train on a subset of the training set in order to find the best hyper-parameters?

In classic machine learning, it is not uncommon to search for hyper-parameters by training different configurations on a small subset of the training set. Usually, for each set of hyper-parameters, k-fold cross-validation is performed on that small subset. However, deep learning models are usually very data-hungry.
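For concreteness, here is a minimal scikit-learn sketch of that classic workflow; the dataset, subset size, model, and parameter grid below are all illustrative assumptions, not anything fixed by the question:

```python
# Sketch: grid-search hyperparameters with k-fold cross-validation
# on a small random subset of the training set (classic ML setting).
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 20))          # stand-in training features
y = (X[:, 0] + X[:, 1] > 0).astype(int)    # stand-in labels

# Tune on a 1,000-sample subset instead of the full 10,000 examples.
idx = rng.choice(len(X), size=1_000, replace=False)

search = GridSearchCV(
    SVC(),
    param_grid={"C": [0.1, 1, 10], "gamma": ["scale", 0.01]},
    cv=5,                                   # 5-fold cross-validation
)
search.fit(X[idx], y[idx])
print(search.best_params_, search.best_score_)
```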

So, my question is: do you think it is still possible to use the same strategy in deep learning? What is your experience?

Yes, but as you noticed, deep learning models usually work best with large samples, so your subset would need to be large as well. With insufficient data, the model will underperform, and its scores won't be a reliable guide for hyperparameter tuning.
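To make this concrete, below is a minimal PyTorch sketch of one way to apply the strategy: evaluate each hyperparameter configuration on a subset (with a held-out validation split carved out of that subset), then retrain the best configuration on the full training set. The architecture, subset size, and hyperparameter grid are illustrative assumptions:

```python
# Sketch: hyperparameter search on a subset, then retrain on the full set.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset, Subset, random_split

torch.manual_seed(0)
X = torch.randn(50_000, 32)                # stand-in training data
y = (X[:, 0] > 0).long()
full_ds = TensorDataset(X, y)

# Carve out a 5,000-sample subset, then split it into tuning train/val parts.
subset = Subset(full_ds, torch.randperm(len(full_ds))[:5_000].tolist())
tune_train, tune_val = random_split(subset, [4_000, 1_000])

def train(dataset, lr, hidden, epochs=3):
    model = nn.Sequential(nn.Linear(32, hidden), nn.ReLU(), nn.Linear(hidden, 2))
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for xb, yb in DataLoader(dataset, batch_size=256, shuffle=True):
            opt.zero_grad()
            loss_fn(model(xb), yb).backward()
            opt.step()
    return model

def accuracy(model, dataset):
    correct = total = 0
    with torch.no_grad():
        for xb, yb in DataLoader(dataset, batch_size=1024):
            correct += (model(xb).argmax(dim=1) == yb).sum().item()
            total += len(yb)
    return correct / total

# Search the grid on the subset only.
results = {}
for lr in (1e-2, 1e-3):
    for hidden in (64, 256):
        model = train(tune_train, lr, hidden)
        results[(lr, hidden)] = accuracy(model, tune_val)

best_lr, best_hidden = max(results, key=results.get)
print("best config on subset:", best_lr, best_hidden)

# Retrain the winning configuration on the full training set.
final_model = train(full_ds, best_lr, best_hidden, epochs=10)
```

Whether the subset scores transfer to the full dataset depends on how representative the subset is; the larger and more diverse it is, the more trustworthy the ranking of configurations.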
