Do the simple parameters also change during hyper-parameter tuning?
During hyper-parameter tuning, are the parameters (the weights already learned during model training) also optimized, or are they fixed while optimal values are found only for the hyper-parameters? Please explain.
The short answer is NO, they are not fixed.
This is because hyper-parameters directly influence your simple parameters. For a neural network, the number of hidden layers to use is a hyper-parameter, while the weights and biases in each layer can be called simple parameters. Naturally, you can't keep the weights of the individual layers constant when the number of layers of the network (a hyper-parameter) is itself variable. Similarly, in linear regression, your regularization hyper-parameter directly impacts the weights that are learned.
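To make the linear-regression case concrete, here is a minimal sketch using the closed-form ridge solution (the toy data and the two `alpha` values are made up for illustration): changing only the regularization hyper-parameter, with everything else identical, yields different learned weights.

```python
import numpy as np

# Hypothetical toy data: 20 samples, 3 features, known true weights.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=20)

def ridge_weights(X, y, alpha):
    """Closed-form ridge solution: w = (X^T X + alpha * I)^{-1} X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)

w_small = ridge_weights(X, y, alpha=0.01)   # weak regularization
w_large = ridge_weights(X, y, alpha=100.0)  # strong regularization

# Same data, same model family -- but the learned weights differ
# because the hyper-parameter alpha changed.
print(w_small)
print(w_large)
```

With the large `alpha` the weights are shrunk toward zero, which is exactly the sense in which the hyper-parameter "directly impacts" the learned parameters.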
So the goal of tuning a hyper-parameter is to find a value that leads to the best set of those simple parameters. Those simple parameters are the ones you actually care about, and they are the ones used for final prediction/deployment. Tuning hyper-parameters while keeping the simple parameters fixed would therefore be meaningless.