
TensorFlow: How to add regularization to the model

I want to add regularization to my optimizer, like this:

tf.train.AdadeltaOptimizer(learning_rate=1).minimize(loss)

But I don't know how to define the loss function in the code below.

The article I was following is: https://blog.csdn.net/marsjhao/article/details/72630147

The code I modified originally came from the Google Machine Learning course: https://colab.research.google.com/notebooks/mlcc/improving_neural_net_performance.ipynb?utm_source=mlcc&utm_campaign=colab-external&utm_medium=referral&utm_content=improvingneuralnets-colab&hl=zh-tw#scrollTo=P8BLQ7T71JWd

Can someone give me some advice or discuss this with me?


import numpy as np
import tensorflow as tf
from sklearn import metrics
from matplotlib import pyplot as plt

# my_input_fn and construct_feature_columns are helpers defined
# earlier in the notebook.
def train_nn_classifier_model_new(
    my_optimizer,
    steps,
    batch_size,
    hidden_units,
    training_examples,
    training_targets,
    validation_examples,
    validation_targets):

  periods = 10
  steps_per_period = steps / periods

  # Create a DNNClassifier object.

  my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
  dnn_classifier = tf.estimator.DNNClassifier(
      feature_columns=construct_feature_columns(training_examples),
      hidden_units=hidden_units,
      optimizer=my_optimizer
      )

  # Create input functions.
  training_input_fn = lambda: my_input_fn(training_examples, 
                                          training_targets["deal_or_not"], 
                                          batch_size=batch_size)
  predict_training_input_fn = lambda: my_input_fn(training_examples,        
                                         training_targets["deal_or_not"], 
                                         num_epochs=1, 
                                         shuffle=False)
  predict_validation_input_fn = lambda: my_input_fn(validation_examples, 
                                         validation_targets["deal_or_not"], 
                                         num_epochs=1, 
                                         shuffle=False)
  # Train the model, but do so inside a loop so that we can periodically assess
  # loss metrics.
  print("Training model...")
  print("LogLoss (on training data):")
  training_log_losses = []
  validation_log_losses = []
  for period in range(0, periods):
    # Train the model, starting from the prior state.
    dnn_classifier.train(
        input_fn=training_input_fn,
        steps=steps_per_period
    )
    # Take a break and compute predictions.    
    training_probabilities = dnn_classifier.predict(input_fn=predict_training_input_fn)
    training_probabilities = np.array([item['probabilities'] for item in training_probabilities])
    print(training_probabilities)

    validation_probabilities = dnn_classifier.predict(input_fn=predict_validation_input_fn)
    validation_probabilities = np.array([item['probabilities'] for item in validation_probabilities])

    training_log_loss = metrics.log_loss(training_targets, training_probabilities)
    validation_log_loss = metrics.log_loss(validation_targets, validation_probabilities)
    # Occasionally print the current loss.
    print("  period %02d : %0.2f" % (period, training_log_loss))
    # Add the loss metrics from this period to our list.
    training_log_losses.append(training_log_loss)
    validation_log_losses.append(validation_log_loss)
  print("Model training finished.")

  # Output a graph of loss metrics over periods.
  plt.ylabel("LogLoss")
  plt.xlabel("Periods")
  plt.title("LogLoss vs. Periods")
  plt.tight_layout()
  plt.plot(training_log_losses, label="training")
  plt.plot(validation_log_losses, label="validation")
  plt.legend()

  return dnn_classifier




result = train_nn_classifier_model_new(
    my_optimizer=tf.train.AdadeltaOptimizer(learning_rate=1),
    steps=30000,
    batch_size=250,
    hidden_units=[150, 150, 150, 150],
    training_examples=training_examples,
    training_targets=training_targets,
    validation_examples=validation_examples,
    validation_targets=validation_targets
    )

Regularization is added to the loss function. Your optimizer, AdadeltaOptimizer, does not support regularization parameters. If you want regularization built into the optimizer itself, use tf.train.ProximalAdagradOptimizer, which exposes l1_regularization_strength and l2_regularization_strength parameters you can set; these parameters were part of the original algorithm.
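For example, here is a minimal sketch of plugging it into your training function; the learning rate and regularization strengths below are placeholder assumptions, not tuned values:

import tensorflow as tf

# ProximalAdagrad applies L1/L2 regularization as part of the update rule.
# The strength values here are illustrative, not tuned.
my_optimizer = tf.train.ProximalAdagradOptimizer(
    learning_rate=0.1,
    l1_regularization_strength=0.001,
    l2_regularization_strength=0.001)

result = train_nn_classifier_model_new(
    my_optimizer=my_optimizer,
    steps=30000,
    batch_size=250,
    hidden_units=[150, 150, 150, 150],
    training_examples=training_examples,
    training_targets=training_targets,
    validation_examples=validation_examples,
    validation_targets=validation_targets)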

Otherwise you have to apply regularization inside your own loss function, but DNNClassifier does not allow a custom loss, so you would have to build the network manually; a sketch of how to add regularization to the loss follows.
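A minimal hand-built example in the low-level TF 1.x API; the layer sizes, placeholder shapes, and the beta strength are illustrative assumptions, not values from your notebook:

import tensorflow as tf

n_features = 10   # assumed input width
beta = 0.01       # hypothetical L2 strength

x = tf.placeholder(tf.float32, shape=[None, n_features])
y = tf.placeholder(tf.float32, shape=[None, 1])

# One hidden layer; the weights are kept as named variables so the
# penalty below can reference them directly.
w1 = tf.Variable(tf.truncated_normal([n_features, 150], stddev=0.1))
b1 = tf.Variable(tf.zeros([150]))
hidden = tf.nn.relu(tf.matmul(x, w1) + b1)

w2 = tf.Variable(tf.truncated_normal([150, 1], stddev=0.1))
b2 = tf.Variable(tf.zeros([1]))
logits = tf.matmul(hidden, w2) + b2

# Data loss plus an explicit L2 penalty on the weights: this is the
# "regularization added to the loss function" described above.
base_loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=logits))
loss = base_loss + beta * (tf.nn.l2_loss(w1) + tf.nn.l2_loss(w2))

train_op = tf.train.AdadeltaOptimizer(learning_rate=1.0).minimize(loss)

The key point is the loss line: the penalty term is summed into the data loss before minimize is called, which is exactly where regularization enters the model.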
