
How tensorflow graph regularization (NSL) affects triplet semihard loss (TFA)

I want to train a deep neural network with a binary target using nsl.keras.GraphRegularization, as described in this tutorial. My model computes a triplet semihard loss on an intermediate dense layer, which should not be graph-regularized.

From the nsl.keras.GraphRegularization definition on Github:

Incorporates graph regularization into the loss of base_model.

Graph regularization is done on the logits layer and only during training.

Does this mean that the intermediate triplet semihard loss will not be affected by this regularization?

Yes, that's right. Graph regularization is applied only to the outputs of base_model. If your base_model uses a triplet semihard loss in another layer, that loss should remain unaffected. If that's not the case, please file a bug at https://github.com/tensorflow/neural-structured-learning/issues.
