
Do the simple parameters also change during hyper-parameter tuning?

During hyper-parameter tuning, are the parameters (the weights already learned during model training) also optimized, or are they kept fixed while optimal values are found only for the hyper-parameters? Please explain.

The short answer is no, they are not fixed.

That is because hyper-parameters directly influence the simple parameters. For a neural network, the number of hidden layers is a hyper-parameter, while the weights and biases in each layer are simple parameters. Obviously, you cannot keep the weights of individual layers constant when the number of layers in the network (a hyper-parameter) is itself variable. Similarly, in linear regression the regularization hyper-parameter directly affects the weights that are learned.
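To make the linear regression example concrete, here is a minimal sketch (the toy data and the two alpha values are illustrative assumptions, not from the original post) showing that changing the regularization hyper-parameter changes the learned weights:

```python
# Sketch: the regularization strength `alpha` (a hyper-parameter) changes
# the weights that ridge regression learns. Toy data is made up for
# illustration; assumes scikit-learn and NumPy are installed.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=100)

weights = {}
for alpha in (0.01, 100.0):
    # .fit() re-learns the simple parameters (coef_) for each alpha
    weights[alpha] = Ridge(alpha=alpha).fit(X, y).coef_

# Stronger regularization shrinks the learned weights toward zero
assert np.linalg.norm(weights[100.0]) < np.linalg.norm(weights[0.01])
```

The two fitted weight vectors differ, which is exactly the point: the simple parameters cannot stay fixed while the hyper-parameter varies.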

So the goal of hyper-parameter tuning is to find a value that leads to the best set of simple parameters. Those simple parameters are the ones you actually care about, and they are the ones used in the final prediction/deployment. Tuning hyper-parameters while keeping them fixed would therefore be meaningless.
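The tuning loop described above can be sketched as follows (the candidate alphas, toy data, and cross-validation setup are assumptions for illustration): every candidate hyper-parameter value triggers a full re-fit, and the final simple parameters come from refitting with the winning value.

```python
# Sketch of a hyper-parameter search: each candidate value of alpha causes
# the model's simple parameters to be re-learned from scratch.
# Toy data and candidate values are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(80, 4))
y = X @ np.array([1.0, 0.0, -2.0, 0.5]) + rng.normal(scale=0.2, size=80)

best_alpha, best_score = None, -np.inf
for alpha in (0.01, 0.1, 1.0, 10.0):
    # cross_val_score fits a fresh model per fold: weights are retrained
    # for every hyper-parameter candidate, never held fixed
    score = cross_val_score(Ridge(alpha=alpha), X, y, cv=5).mean()
    if score > best_score:
        best_alpha, best_score = alpha, score

# Refit with the best hyper-parameter; final_model.coef_ holds the simple
# parameters that would actually be deployed
final_model = Ridge(alpha=best_alpha).fit(X, y)
```

This mirrors what tools like scikit-learn's `GridSearchCV` do internally with `refit=True`: the search selects the hyper-parameter, then the simple parameters are learned one last time with that value.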
