

Custom loss function in Keras: Initializing and appending values to a Tensor in loop

I am writing a custom loss function in Keras using Keras and TensorFlow backend functions. I want to minimize the mean squared error between f(y_true) and f(y_pred), where f(y) is a nonlinear function.

f(y) = [f1(y) f2(y) ... f12(y)], where fk(y) = Xi_k * Theta_k(y) for k = 1, 2, ..., 12. Xi_k and Theta_k(y) are rank-1 tensors, so each fk(y) is their inner product, a scalar.
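As a point of reference, a single term fk(y) maps directly onto TensorFlow ops; the sketch below uses illustrative shapes (a basis of 5 monomials over 15 variables), not the sizes of the original model:

import tensorflow as tf

y = tf.random.uniform([15])    # one sample with 15 variables
iVarDeg_k = tf.ones([5, 15])   # exponent table, 5 monomials x 15 variables (illustrative)
Xi_k = tf.random.uniform([5])  # rank-1 coefficient vector

Theta_k = tf.reduce_prod(tf.pow(y, iVarDeg_k), axis=1)  # monomial basis, shape (5,)
f_k = tf.reduce_sum(Xi_k * Theta_k)                     # inner product -> scalar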

Since y_true and y_pred are of size (batchSize x 15), I need to calculate the value of f(y) in a loop over all the samples in the training batch (I believe avoiding the loop is not possible). The output of the loop operation would be two tensors of size (batchSize x 12):

[[f(y_true[1,:])],[f(y_true[2,:])],...,[f(y_true[batchSize,:])]]

and

[[f(y_pred[1,:])],[f(y_pred[2,:])],...,[f(y_pred[batchSize,:])]]

Generally, when dealing with arrays or matrices, we either initialize a matrix of the desired size and assign values to it in the loop, or we create an empty matrix and append values to it in the loop (a NumPy sketch of both patterns follows below). But how do we do the same with tensors?
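For reference, a minimal NumPy sketch of those two patterns, with illustrative sizes:

import numpy as np

batchSize, nOut = 4, 12  # illustrative sizes

# Pattern 1: preallocate, then assign in the loop
out = np.zeros((batchSize, nOut))
for i in range(batchSize):
    out[i, :] = np.arange(nOut)   # stand-in for f(y[i, :])

# Pattern 2: start empty, append in the loop, stack at the end
rows = []
for i in range(batchSize):
    rows.append(np.arange(nOut))  # stand-in for f(y[i, :])
out = np.stack(rows)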

Below is a simplified form of the custom loss function (calculating just f1(y_true) and f1(y_pred)). The initialization and appending don't work because they are not tf/Keras operations; what should I use in their place to make this work with tensors?

import scipy.io as spio
import tensorflow as tf
from tensorflow.keras import backend as Kb

matd = spio.loadmat('LPVmodel_iVarDeg.mat', squeeze_me=True)
mate = spio.loadmat('PNLSS_model_modified.mat', squeeze_me=True)

def custom_loss_fn(y_true, y_pred):
    iVarDeg1 = tf.convert_to_tensor(matd['iVarDeg1'], dtype=tf.float32)  # (XiSize(1) x 15)
    Xi1 = tf.convert_to_tensor(mate['Xim1'], dtype=tf.float32)           # (XiSize(1),) after squeeze_me

    batchSize = m  # m: number of samples per batch, defined elsewhere
    fy_true = []   # initialization -- not a tf/Keras operation
    fy_pred = []   # initialization -- not a tf/Keras operation

    for i in range(batchSize):
        yin = y_true[i, :]  # (1 x 15) target
        tin = y_pred[i, :]  # (1 x 15) network output

        ypowerD = tf.math.pow(yin, iVarDeg1)             # element-wise power (XiSize(1) x 15)
        monomial = tf.math.reduce_prod(ypowerD, axis=1)  # product over the 15 variables -> (XiSize(1),)
        Theta1 = monomial                                # nonlinear basis for state eq 1

        ypowerD = tf.math.pow(tin, iVarDeg1)
        monomial = tf.math.reduce_prod(ypowerD, axis=1)
        Gamma1 = monomial

        temp = tf.math.reduce_sum(tf.math.multiply(Xi1, Theta1))  # sum(element-wise product)
        fy_true.append(temp)  # appending -- not a tf/Keras operation

        temp = tf.math.reduce_sum(tf.math.multiply(Xi1, Gamma1))
        fy_pred.append(temp)

    return Kb.mean(Kb.sum(Kb.square(fy_pred - fy_true)))

In graph mode, if you want to fill a tensor in a loop like a list, you can use tf.TensorArray:

To 'initialise', for example:

ta = tf.TensorArray(tf.float32, size=0, dynamic_size=True, clear_after_read=False)

To 'append':

ta = ta.write(1, 20)

To transform the TensorArray into a tensor:

ta.stack()
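Applied to the question's loop, a minimal sketch of the loss with TensorArray could look like the following; the names matd, mate, Xi1, and iVarDeg1 and their shapes are carried over from the question, so this is a sketch under those assumptions rather than a tested drop-in:

import tensorflow as tf

def custom_loss_fn(y_true, y_pred):
    iVarDeg1 = tf.convert_to_tensor(matd['iVarDeg1'], dtype=tf.float32)  # (XiSize(1) x 15)
    Xi1 = tf.convert_to_tensor(mate['Xim1'], dtype=tf.float32)           # (XiSize(1),)

    batchSize = tf.shape(y_true)[0]

    # dynamic_size=True lets the arrays grow like Python lists
    fy_true = tf.TensorArray(tf.float32, size=0, dynamic_size=True)
    fy_pred = tf.TensorArray(tf.float32, size=0, dynamic_size=True)

    for i in tf.range(batchSize):
        Theta1 = tf.reduce_prod(tf.pow(y_true[i, :], iVarDeg1), axis=1)  # basis from the target
        Gamma1 = tf.reduce_prod(tf.pow(y_pred[i, :], iVarDeg1), axis=1)  # basis from the prediction

        # write() returns the updated TensorArray, so reassign it
        fy_true = fy_true.write(i, tf.reduce_sum(Xi1 * Theta1))
        fy_pred = fy_pred.write(i, tf.reduce_sum(Xi1 * Gamma1))

    # stack() turns each array into a rank-1 tensor of length batchSize
    return tf.reduce_mean(tf.square(fy_pred.stack() - fy_true.stack()))

One detail worth noting: TensorArray.write returns a new handle rather than mutating in place, so the returned value has to be assigned back as above; otherwise the write is lost.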
