
Writing this exotic NN architecture with keras, tensorflow and python

I'm trying to get Keras to train a multiclass classification model that can be written as the following network:

[network diagram of the model]

The only trainable parameters are the coefficients a_pk; everything else is given. The functions f_k are combinations of usual mathematical functions (for example f(x, y) = exp((x - y)/c)). Sigma stands for summing the previous terms and softmax is the usual function. The (x_1, x_2, ..., x_n) are elements of the train or test set, and the x_pk are rows of a specific subset of the original data that has already been selected.

The model in more depth:

Specifically, given an input (x_1, x_2, ..., x_n) from the train or test set, the network evaluates

row_p = a_p1 * f_1(x_1, x_p1) + a_p2 * f_2(x_2, x_p2) + ... + a_pn * f_n(x_n, x_pn),   for p = 1, ..., m

where the f_k are given mathematical functions, the x_pk are rows of a particular subset of the original data, and the coefficients a_pk are the parameters I want to train. As I'm using Keras, I expect it to add a bias term to each row.

After the above evaluation, I will apply a softmax layer (each of the m rows above yields a number, and these m numbers are the inputs to the softmax function).
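To make the target computation concrete, here is a minimal NumPy sketch of the forward pass described above (the names and the particular f are mine, chosen only for illustration, and a single element-wise f is assumed for simplicity):

import numpy as np

n, m = 4, 3                     # n features, m pre-selected rows
x = np.random.rand(n)           # one input sample (x_1, ..., x_n)
x_pk = np.random.rand(m, n)     # the m pre-selected rows
a_pk = np.random.rand(m, n)     # the trainable coefficients

def f(x, y):
    # example element-wise function; the real f_k are combinations of usual functions
    return np.exp(x - y)

rows = (a_pk * f(x[None, :], x_pk)).sum(axis=1)   # shape (m,): one number per row
probs = np.exp(rows) / np.exp(rows).sum()         # softmax over the m rows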

At the end I want to compile the model and run model.fit as usual.

The problem is that I couldn't translate this expression into Keras syntax.

My attempt:

Following the network sketch above, I first tried to write each expression of the form row_p = sum_k a_pk * f_k(x_k, x_pk) as a Lambda layer in a Sequential model, but the best I could get to work was a Lambda layer combined with a Dense layer with linear activation (which would play the role of a single row's parameters a_p1, ..., a_pn), outputting a vector (a_1*f(x_1, y_1), ..., a_n*f(x_n, y_n)) without the required summation, as follows:

from keras.models import Sequential
from keras.layers import Dense, Lambda
import tensorflow as tf

model = Sequential()
#single row considered:
model.add(Lambda(lambda x: f_fixedRow(x), input_shape=(nFeatures,)))
#parameters set after lambda layer to get (a1*f(x1,y1),...,an*f(xn,yn)) and not (f(a1*x1,y1),...,f(an*xn,yn))
model.add(Dense(nFeatures, activation='linear'))

#missing summation: sum(x)
#missing evaluation of f in all other rows

model.add(Dense(classes,activation='softmax',trainable=False)) #should get all rows
model.compile(optimizer='sgd',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

Also, I had to define the function used in the Lambda layer with the x_pk argument already fixed (because the lambda function can only take the input layer as its variable):

def f_fixedRow(x):
    #picking a particular row (as a vector) to evaluate f in (f works element-wise)
    y = tf.constant(value=x[0,:], dtype=tf.float32)
    return f(x, y)

I managed to write the f function with TensorFlow (working element-wise on a row), although this is a possible source of problems in my code (and the above workaround seems unnatural).

I also thought that if I could properly write the element-wise sum of the vector in the attempt above, I could repeat the same procedure for every row in a parallelized manner with the Keras functional API and then feed the output of each parallel branch into a softmax function, as I need (a rough sketch of this idea is shown below).
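For illustration only, such a parallel-branches version might look roughly like this (make_f_fixedRow is a hypothetical factory returning f with the p-th x_pk row fixed; note that the Dense layer here would still mix all features instead of applying one weight per feature, which is exactly the problem I have):

from keras.layers import Input, Dense, Lambda, Concatenate, Activation
from keras.models import Model
import keras.backend as K

inp = Input(shape=(nFeatures,))

branches = []
for p in range(m):  # one branch per x_pk row
    h = Lambda(make_f_fixedRow(p))(inp)                          # (f(x_1, x_p1), ..., f(x_n, x_pn))
    h = Dense(nFeatures, activation='linear')(h)                 # should be element-wise a_pk weights instead
    h = Lambda(lambda t: K.sum(t, axis=-1, keepdims=True))(h)    # the missing summation, one number per branch
    branches.append(h)

out = Concatenate()(branches)   # shape (batch, m)
out = Activation('softmax')(out)
model = Model(inp, out)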

Another approach I considered was to train the parameters while keeping their natural matrix structure from the network description above, perhaps by writing a matrix Lambda layer, but I could not find anything related to this idea.

Anyway, I'm not sure what a good way to work with this model within Keras is; maybe I'm missing an important point because of the non-standard way the parameters are written, or because of my lack of experience with TensorFlow. Any suggestions are welcome.

Answer:

For this answer, it's important that f be a tensor function that operates element-wise (no iterating). This is reasonably easy to achieve; just check the Keras backend functions.
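For instance, an element-wise f built only from backend operations could look like the following (the particular formula is just an example):

import keras.backend as K

def f(x, y):
    # purely element-wise: works on tensors of any matching (or broadcastable) shape
    return K.exp(x - y) / (1.0 + K.square(y))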

Assumptions:

  • The x_pk set is constant, otherwise this solution must be reviewed.
  • The function f is elementwise (if not, please show f for better code)

Your model will need x_pk as a tensor input, and you should do that in a functional API model.

import keras.backend as K
from keras.layers import Input, Lambda, Activation
from keras.models import Model

#x_pk data
x_pk_numpy = select_X_pk_samples(x_train)
x_pk_tensor = K.variable(x_pk_numpy)

#number of rows in x_pk
m = len(x_pk_numpy)

#I suggest a fixed batch size for simplicity
batch = some_batch_size

First, let's work on the function that will take x and x_pk and call f.

def calculate_f(inputs): #inputs will be a list with x and x_pk
    x, x_pk = inputs

    #since f will work elementwise, let's replicate x and x_pk so they have equal shapes 
    #please explain f for better optimization

    # x from (batch, n) to (batch, m, n)
    x = K.stack([x]*m, axis=1)

    # x_pk from (m, n) to (batch, m, n)
    x_pk = K.stack([x_pk]*batch, axis=0)
        #a batch size of 1 could make this even simpler    
        #a variable batch size would make this more complicated
        #certain f functions could make this process unnecessary    

    return f(x, x_pk)

Now, unlike a Dense layer, this formula uses the a_pk weights multiplied element-wise, so we need a custom layer:

from keras.layers import Layer

class ElementwiseWeights(Layer):
    def __init__(self, **kwargs):
        super(ElementwiseWeights, self).__init__(**kwargs)

    def build(self, input_shape):
        weight_shape = (1,) + input_shape[1:] #shape (1, m, n)

        self.kernel = self.add_weight(name='kernel',
                                      shape=weight_shape,
                                      initializer='uniform',
                                      trainable=True)

        super(ElementwiseWeights, self).build(input_shape)

    def compute_output_shape(self,input_shape):
        return input_shape

    def call(self, inputs):
        return self.kernel * inputs

Now let's build our functional API model:

#x_pk model tensor input
x_pk = Input(tensor=x_pk_tensor) #shape (m, n)

#x usual input with fixed batch size
x = Input(batch_shape=(batch,n))  #shape (batch, n)

#calculate F
out = Lambda(calculate_f)([x, x_pk]) #shape (batch, m, n)

#multiply a_pk
out = ElementwiseWeights()(out) #shape (batch, m, n)

#sum n elements, keep m rows:
out = Lambda(lambda x: K.sum(x, axis=-1))(out) #shape (batch, m)

#softmax
out = Activation('softmax')(out) #shape (batch,m)

Continue this model with whatever you want and finish it:

model = Model([x, x_pk], out)
model.compile(.....)
model.fit(x_train, y_train, ....) #perhaps you might need .fit([x_train], y_train, ...)
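For what it's worth, a hedged sketch of those last calls with the fixed batch size might be (this assumes that, because x_pk was created with Input(tensor=x_pk_tensor), Keras does not expect data for it at fit time; the exact call may need adjusting):

model.compile(optimizer='sgd',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# only the regular input gets data; the number of samples should be a multiple of the fixed batch size
model.fit(x_train, y_train, batch_size=batch, epochs=10)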

Edit for function f

You can have the proposed f like this:

import numpy as np

#create the n coefficients:
coefficients = np.array([c0, c1, ..., cn])
coefficients = coefficients.reshape((1,1,n))

def f(x, x_pk):
    c = K.variable(coefficients) #shape (1, 1, n)
    out = (x - x_pk) / c
    return K.exp(out)
  • This f would accept x with shape (batch, 1, n), without the stack used in the calculate_f function.
  • Or it could accept x_pk with shape (1, m, n), allowing a variable batch size.

But I'm not sure it's possible to have both of these shapes together. Testing this might be interesting.
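If you want to test that, a variant of calculate_f that relies on broadcasting instead of stacking could look like this (an untested sketch; whether broadcasting propagates correctly through f depends on the actual f):

def calculate_f_broadcast(inputs):
    x, x_pk = inputs

    # x from (batch, n) to (batch, 1, n)
    x = K.expand_dims(x, axis=1)

    # x_pk from (m, n) to (1, m, n)
    x_pk = K.expand_dims(x_pk, axis=0)

    # relies on broadcasting inside f to produce shape (batch, m, n)
    return f(x, x_pk)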
