
Training a FF Neural Language Model

Consider the character 3-grams of the sentence "the cat is upstairs", where each word is wrapped with a leading @ and a trailing ~ before being split into overlapping trigrams:

trigrams = ['@th', 'the', 'he~', '@ca', 'cat', 'at~', '@is', 'is~', 
             '@up', 'ups', 'pst', 'sta', 'tai', 'air', 'irs', 'rs~']
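
For reference, the list above can be reproduced like this (a sketch: each word is wrapped as @word~ and split into overlapping 3-character windows):

sentence = "the cat is upstairs"
trigrams = []
for word in sentence.split():
    padded = '@' + word + '~'
    trigrams.extend(padded[i:i+3] for i in range(len(padded) - 2))
# trigrams == ['@th', 'the', 'he~', '@ca', 'cat', 'at~', '@is', 'is~',
#              '@up', 'ups', 'pst', 'sta', 'tai', 'air', 'irs', 'rs~']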

I want to train a character-based feed-forward neural language model on this sentence, but I am having trouble setting up the X and y parameters correctly.

My code is as follows:

import numpy as np
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Flatten, Dense

# trigrams encoded: map each distinct trigram to a 1-based integer index
d = dict([(y,x+1) for x,y in enumerate(sorted(set(trigrams)))])
trigrams_encoded = [d[x] for x in trigrams]
# trigrams_encoded = [3, 15, 8, 1, 7, 6, 2, 10, 4, 16, 11, 13, 14, 5, 9, 12]

# x_train
x_train = [] # list of lists, each list contains 3 encoded trigrams
for i in range(len(trigrams_encoded)-3):
    lst = trigrams_encoded[i:i+3]
    x_train.append(lst)
x_train = np.array(x_train)            # x_train shape is (13,3)

# y_train
y_train = trigrams_encoded[3:]
data = np.array(y_train)
y_onehot = to_categorical(data)        # y_onehot shape is (13,17)
y_onehot = np.delete(y_onehot, 0, 1)   # drop the all-zero column for index 0; now shape is (13,16)

# define model
model = Sequential()
model.add(Embedding(len(d), 10, input_length=3)) #len(d) = 16
model.add(Flatten())
model.add(Dense(10, activation='relu'))
model.add(Dense(len(d), activation='softmax'))

# compile the model
# I have set sparse_categorical_crossentropy here, but I am not sure it is correct; feel free to change it
model.compile(loss="sparse_categorical_crossentropy", optimizer='adam', metrics=['accuracy'])

# train the model
model.fit(x_train, y_onehot, epochs=1, verbose=0)

My reasoning was that, since input_length=3, the model would take triplets of consecutive encoded trigrams as input, each labelled with the next encoded trigram in the sequence. But this seems to fail. (Should it fail?)
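
To make the pairing concrete, the first few (input, label) pairs look like this (a quick sketch; the helper name inv is just for illustration):

# Show the first few (window, target) pairs fed to the model.
inv = {v: k for k, v in d.items()}   # encoded index -> trigram
for i in range(3):
    window = trigrams_encoded[i:i+3]
    target = trigrams_encoded[i+3]
    print([inv[j] for j in window], '->', inv[target])
# ['@th', 'the', 'he~'] -> @ca
# ['the', 'he~', '@ca'] -> cat
# ['he~', '@ca', 'cat'] -> at~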

The above code raises the following error, which I do not know how to solve:

"InvalidArgumentError: Graph execution error:

Detected at node 'sequential/embedding/embedding_lookup' defined at (most recent call last):

(... many lines...)

Node: 'sequential/embedding/embedding_lookup'
indices[5,1] = 16 is not in [0, 16)"

Could you please advise on the correct choices of X and y here?

Your code runs fine with categorical_crossentropy as the loss function, since you are using one-hot encoded labels. One more fix is needed: your encoded indices run from 1 to 16, but Embedding(len(d), ...) only accepts indices in [0, 16), which is exactly the lookup error you are seeing. Setting the embedding input dimension to len(d) + 1 makes index 16 valid:

import numpy as np
import tensorflow as tf

trigrams = ['@th', 'the', 'he~', '@ca', 'cat', 'at~', '@is', 'is~', 
             '@up', 'ups', 'pst', 'sta', 'tai', 'air', 'irs', 'rs~']


# trigrams encoded
d = dict([(y,x+1) for x,y in enumerate(sorted(set(trigrams)))])
trigrams_encoded = [d[x] for x in trigrams]
# trigrams_encoded = [3, 15, 8, 1, 7, 6, 2, 10, 4, 16, 11, 13, 14, 5, 9, 12]

# x_train
x_train = [] # list of lists, each list contains 3 encoded trigrams
for i in range(len(trigrams_encoded)-3):
    lst = trigrams_encoded[i:i+3]
    x_train.append(lst)
x_train = np.array(x_train)            # x_train shape is (13,3)

# y_train
y_train = trigrams_encoded[3:]
data = np.array(y_train)
y_onehot = tf.keras.utils.to_categorical(data)        # y_onehot shape is (13,17)
y_onehot = np.delete(y_onehot, 0, 1)   # drop the all-zero column for index 0; now shape is (13,16)

# define model
model = tf.keras.Sequential()
model.add(tf.keras.layers.Embedding(len(d) + 1, 10, input_length=3)) # input_dim = 17 so that indices 1..16 are all valid
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(10, activation='relu'))
model.add(tf.keras.layers.Dense(len(d), activation='softmax'))

model.compile(loss="categorical_crossentropy", optimizer='adam', metrics=['accuracy'])

# train the model
model.fit(x_train, y_onehot, epochs=5, verbose=1)
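
Once trained, you can sanity-check the model by predicting the trigram that follows a context window (a quick usage sketch; remember the softmax index is 0-based because column 0 was deleted, so add 1 before mapping back through d):

# Predict the trigram following the first 3-trigram window.
inv = {v: k for k, v in d.items()}            # encoded index -> trigram
probs = model.predict(x_train[:1])            # shape (1, 16)
pred = int(np.argmax(probs, axis=-1)[0]) + 1  # +1 undoes the deleted column 0
print(inv[pred])                              # ideally '@ca' after enough training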

sparse_categorical_crossentropy, by contrast, expects integer class labels rather than one-hot vectors.
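
If you prefer to keep sparse_categorical_crossentropy, you can skip the one-hot step entirely and train on integer labels shifted to start at 0 (a sketch reusing the model and x_train from above; y_sparse is a name chosen here for illustration):

# Alternative: integer labels with sparse_categorical_crossentropy.
# Shift the labels from 1..16 down to 0..15 to match the 16 softmax outputs.
y_sparse = np.array(trigrams_encoded[3:]) - 1

model.compile(loss="sparse_categorical_crossentropy", optimizer='adam', metrics=['accuracy'])
model.fit(x_train, y_sparse, epochs=5, verbose=1)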
