
Neural network Hyper-parameters Optimization and Sensitivity Analysis

I am working with a very large dataset in Keras, using a single-output neural network. After changing the depth of the network, I observed some improvement in the model's performance. I would therefore now like to perform a systematic, research-grade hyper-parameter optimization (number of hidden layers, activation functions, number of neurons, epochs, batch size, etc.). However, I was told that GridSearchCV and RandomSearchCV are not proper options because my dataset is so large. I was wondering whether anyone has experience in this regard, or feedback that could point me in the right direction.
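For reference, one widely used alternative to GridSearchCV on large Keras datasets is a budget-based tuner such as Hyperband from the keras-tuner package. The sketch below is only illustrative and is not from the original post; build_model, X_train, Y_train, and the search ranges are placeholder assumptions.

import keras_tuner as kt
from tensorflow import keras

# Hypothetical model builder: the tuner varies depth, width and activation.
def build_model(hp):
    model = keras.Sequential()
    model.add(keras.layers.Input(shape=(X_train.shape[1],)))
    for i in range(hp.Int("num_layers", 1, 4)):
        model.add(keras.layers.Dense(
            units=hp.Int(f"units_{i}", 32, 256, step=32),
            activation=hp.Choice("activation", ["relu", "tanh"])))
    model.add(keras.layers.Dense(1))
    model.compile(optimizer="adam", loss="mse")
    return model

# Hyperband gives most candidate configurations only a small epoch budget,
# which keeps the search affordable on a large dataset.
tuner = kt.Hyperband(build_model, objective="val_loss",
                     max_epochs=30, factor=3,
                     directory="tuning", project_name="nn_hparams")
tuner.search(X_train, Y_train, validation_split=0.2, batch_size=256)
best_hp = tuner.get_best_hyperparameters(num_trials=1)[0]
print(best_hp.values)

The batch size is fixed in this sketch; to tune it as well, one would typically subclass kt.HyperModel and override its fit method.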

Use a confusion matrix and heat map to measure the performance (accuracy) of your network:

import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix, classification_report

# Assumes the model outputs one score per class (e.g. softmax), so argmax
# recovers the predicted / true class indices from the one-hot rows.
Y_pred = model.predict(X_test)
Y_pred2 = np.argmax(Y_pred, axis=1)
Y_test2 = np.argmax(Y_test, axis=1)

# Confusion matrix rendered as a heat map
cm = confusion_matrix(Y_test2, Y_pred2)
sns.heatmap(cm)
plt.show()

# Per-class precision, recall and F1 scores
print(classification_report(Y_test2, Y_pred2, target_names=label_names))
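
If the plain heat map is hard to read, the same matrix can optionally be annotated with counts and class names (a small variation on the call above, assuming the same label_names list):

sns.heatmap(cm, annot=True, fmt="d",
            xticklabels=label_names, yticklabels=label_names)
plt.xlabel("Predicted label")
plt.ylabel("True label")
plt.show()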
