
Comparing the output of AutoML

I am getting different feature-importance output when I run AutoML in Azure, Google Cloud, and H2O, even though the data and the features are identical. What could be the reason for this? Is there another method to compare the models?

This is expected behavior: H2OAutoML is not reproducible by default. To make H2OAutoML reproducible, set max_models and seed, exclude Deep Learning (exclude_algos=["DeepLearning"], since its training is not reproducible even with a fixed seed), and make sure max_runtime_secs is not set (a time budget makes the number of trained models depend on machine speed).
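A minimal sketch of a reproducible H2OAutoML run with these settings. It assumes a running local H2O cluster (requires Java); the file train.csv and the response column y are hypothetical placeholders:

```python
import h2o
from h2o.automl import H2OAutoML

h2o.init()  # start or attach to a local H2O cluster

# Hypothetical training frame and response column.
train = h2o.import_file("train.csv")

aml = H2OAutoML(
    max_models=20,                   # fixed model budget instead of a time budget
    seed=1,                          # fixed seed for reproducibility
    exclude_algos=["DeepLearning"],  # DeepLearning is not reproducible even with a seed
    # Note: do NOT set max_runtime_secs; a wall-clock budget
    # makes the set of trained models machine-dependent.
)
aml.train(y="y", training_frame=train)
print(aml.leaderboard)
```

With the same data, the same H2O version, and these settings, repeated runs should produce the same leaderboard.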

To compare the models, you can use model explanations, or simply compare the model metrics.
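One simple, tool-agnostic comparison is to check how well the feature-importance rankings from two AutoML runs agree, e.g. via Spearman rank correlation. A self-contained sketch (the importance values below are made-up illustrative numbers, not real output from any tool):

```python
def spearman_rank_corr(a, b):
    """Spearman rank correlation between two equal-length lists
    (simple version, no special handling of tied values)."""
    def ranks(xs):
        order = sorted(range(len(xs)), key=lambda i: xs[i], reverse=True)
        r = [0] * len(xs)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r

    ra, rb = ranks(a), ranks(b)
    n = len(a)
    d2 = sum((x - y) ** 2 for x, y in zip(ra, rb))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Hypothetical importances for the same features from two tools.
h2o_imp   = {"age": 0.40, "income": 0.30, "tenure": 0.20, "region": 0.10}
azure_imp = {"age": 0.35, "income": 0.33, "tenure": 0.22, "region": 0.10}

features = list(h2o_imp)
rho = spearman_rank_corr([h2o_imp[f] for f in features],
                         [azure_imp[f] for f in features])
print(f"Spearman rank correlation: {rho:.2f}")  # 1.00 here: same ranking
```

A correlation near 1 means the tools agree on which features matter most even if the raw importance numbers differ, which is often all you need to know; the raw values are rarely comparable across tools because each computes importance differently (e.g. tree split gain vs. permutation importance).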

