
Custom loss function in Keras: Initializing and appending values to a Tensor in loop

I am writing a custom loss function in Keras using Keras and TensorFlow backend functions. I want to minimize the mean square error between f(y_true) and f(y_pred), where f(y) is a nonlinear function.

f(y) = [f1(y) f2(y) ... f12(y)], where fk(y) = Xi_k · Theta_k(y) for k = 1, 2, ..., 12. Xi_k and Theta_k(y) are rank-1 tensors, so each fk(y) is a scalar (a dot product).
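In NumPy terms, a single fk(y) can be sketched as follows. The exponent table, coefficients, and small shapes below are made-up toy values for illustration, not the contents of the actual .mat files:

```python
import numpy as np

# Hypothetical small shapes: 3 monomials over a 4-dimensional y
# (the real problem uses XiSize(k) monomials over 15 dimensions).
iVarDeg = np.array([[1, 0, 0, 0],
                    [0, 2, 0, 0],
                    [1, 1, 0, 1]], dtype=float)  # exponent table (XiSize x ny)
Xi = np.array([0.5, -1.0, 2.0])                  # coefficients (XiSize,)

def f_k(y):
    """f_k(y) = Xi . Theta_k(y), where Theta_k(y) is a vector of monomials of y."""
    Theta = np.prod(y ** iVarDeg, axis=1)  # one monomial per row -> (XiSize,)
    return Xi @ Theta                      # scalar

y = np.array([2.0, 3.0, 1.0, 1.0])
# Theta = [2, 9, 6] -> f = 0.5*2 - 1.0*9 + 2.0*6 = 4.0
print(f_k(y))
```

The broadcasting trick `y ** iVarDeg` raises each component of y to the exponent stored in the corresponding column of the table, and the row-wise product builds the monomials.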

Since y_true and y_pred have shape (batchSize x 15), I need to compute f(y) in a loop over all the samples in the training batch (I believe the loop cannot be avoided). The output of the loop would be tensors of shape (batchSize x 12):

[[f(y_true[1,:])],[f(y_true[2,:])],...,[f(y_true[batchSize,:])]]

and

[[f(y_pred[1,:])],[f(y_pred[2,:])],...,[f(y_pred[batchSize,:])]]

Generally, when dealing with arrays or matrices, we either initialize a matrix of the desired size and assign values to it inside the loop, or we create an empty container and append values to it inside the loop. But how do we do the same with tensors?

Below is a simplified form of the custom loss function (it only calculates f1(y_true) and f1(y_pred)). The initialization and appending don't work because they are not tf/Keras operations; what should I use in their place to make this work with tensors?

import scipy.io as spio
import tensorflow as tf
from tensorflow.keras import backend as Kb

matd = spio.loadmat('LPVmodel_iVarDeg.mat', squeeze_me=True)
mate = spio.loadmat('PNLSS_model_modified.mat', squeeze_me=True)

def custom_loss_fn(y_true, y_pred):
    iVarDeg1 = tf.convert_to_tensor(matd['iVarDeg1'], dtype=tf.float32)  # (XiSize(1) x 15)
    Xi1 = tf.convert_to_tensor(mate['Xim1'], dtype=tf.float32)           # (XiSize(1) x 1)

    batchSize = m  # m = batch size, defined elsewhere
    fy_true = []   # initialization
    fy_pred = []   # initialization

    for i in range(batchSize):
        yin = y_true[i, :]  # (1 x 15) target
        tin = y_pred[i, :]  # (1 x 15) network output

        ypowerD = tf.math.pow(yin, iVarDeg1)             # element-wise power (XiSize(1) x 15)
        monomial = tf.math.reduce_prod(ypowerD, axis=1)  # row-wise product of elements (XiSize(1) x 1)
        Theta1 = monomial   # nonlinear basis for state eq. 1 (XiSize(1) x 1)

        ypowerD = tf.math.pow(tin, iVarDeg1)
        monomial = tf.math.reduce_prod(ypowerD, axis=1)
        Gamma1 = monomial

        temp = tf.math.reduce_sum(tf.math.multiply(Xi1, Theta1))  # sum(element-wise product)
        fy_true.append(temp)

        temp = tf.math.reduce_sum(tf.math.multiply(Xi1, Gamma1))
        fy_pred.append(temp)

    return Kb.mean(Kb.square(Kb.stack(fy_pred) - Kb.stack(fy_true)))

In graph mode, if you want to fill a tensor in a loop the way you would a list, you can use tf.TensorArray:

To 'initialize', e.g.:

ta = tf.TensorArray(tf.float32, size=0, dynamic_size=True, clear_after_read=False)

To 'append':

ta = ta.write(1, 20)  # write() returns the updated TensorArray, so reassign it

To convert the TensorArray back to a tensor:

ta.stack()
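Putting the three pieces together, here is a minimal runnable sketch of the pattern inside a tf.function (graph mode). The function name row_sums and the toy data are illustrative only; in the loss function above you would write each per-sample scalar into the TensorArray instead:

```python
import tensorflow as tf

@tf.function  # graph mode, where Python lists cannot accumulate loop results
def row_sums(x):
    n = tf.shape(x)[0]
    ta = tf.TensorArray(tf.float32, size=0, dynamic_size=True)
    for i in tf.range(n):
        # write() returns the updated TensorArray, so it must be reassigned
        ta = ta.write(i, tf.reduce_sum(x[i, :]))
    return ta.stack()  # (n,) tensor

x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
print(row_sums(x).numpy())  # [3. 7.]
```

AutoGraph converts the `for i in tf.range(n)` loop into a graph-level while loop, with the TensorArray carried through as loop state.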
