

keras understanding Word Embedding Layer

From the page I got the code below:

from numpy import array
from keras.preprocessing.text import one_hot
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Flatten
from keras.layers.embeddings import Embedding
# define documents
docs = ['Well done!',
        'Good work',
        'Great effort',
        'nice work',
        'Excellent!',
        'Weak',
        'Poor effort!',
        'not good',
        'poor work',
        'Could have done better.']
# define class labels
labels = array([1,1,1,1,1,0,0,0,0,0])
# integer encode the documents
vocab_size = 50
encoded_docs = [one_hot(d, vocab_size) for d in docs]
print(encoded_docs)
# pad documents to a max length of 4 words
max_length = 4
padded_docs = pad_sequences(encoded_docs, maxlen=max_length, padding='post')
print(padded_docs)
# define the model
model = Sequential()
model.add(Embedding(vocab_size, 8, input_length=max_length))
model.add(Flatten())
model.add(Dense(1, activation='sigmoid'))
# compile the model
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])
# summarize the model
print(model.summary())
# fit the model
model.fit(padded_docs, labels, epochs=50, verbose=0)
# evaluate the model
loss, accuracy = model.evaluate(padded_docs, labels, verbose=0)
print('Accuracy: %f' % (accuracy*100))
  1. I looked at encoded_docs and noticed that the words done and work both have a one_hot encoding of 2. Why? Is it because the unicity of the word-to-index mapping is not guaranteed, as per this page?
  2. I got the embeddings with the command embeddings = model.layers[0].get_weights()[0]. In that case, why do we get an embedding object of size 50? Even though two words have the same one_hot number, do they have different embeddings?
  3. How can I tell which embedding belongs to which word, i.e. done vs work?
  4. I also found the code below on the page, which could help with finding the embedding of each word. But I don't know how to create word_to_index.

    word_to_index is a mapping (i.e. a dict) from words to their index, e.g. love: 69
    words_embeddings = {w: embeddings[idx] for w, idx in word_to_index.items()}

  5. Please confirm that my understanding of the Param # column below is correct.

The first layer has 400 parameters because the total word count is 50 and the embedding has 8 dimensions, so 50 * 8 = 400.

The last layer has 33 parameters because each sentence has at most 4 words, so 4 * 8 = 32 from the embedding dimensions, plus 1 for the bias: 33 in total.

_________________________________________________________________
Layer (type)                 Output Shape              Param#   
=================================================================
embedding_3 (Embedding)      (None, 4, 8)              400       
_________________________________________________________________
flatten_3 (Flatten)          (None, 32)                0         
_________________________________________________________________
dense_3 (Dense)              (None, 1)                 33        
=================================================================
  6. Finally, if point 1 above is correct, is there a better way to get the embedding layer model.add(Embedding(vocab_size, 8, input_length=max_length)) without doing the one-hot encoding encoded_docs = [one_hot(d, vocab_size) for d in docs]?

+++++++++++++++++++++++++++++++ update - providing the updated code +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

from numpy import array
from keras.preprocessing.text import one_hot
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Flatten
from keras.layers.embeddings import Embedding
# define documents
docs = ['Well done!',
        'Good work',
        'Great effort',
        'nice work',
        'Excellent!',
        'Weak',
        'Poor effort!',
        'not good',
        'poor work',
        'Could have done better.']
# define class labels
labels = array([1,1,1,1,1,0,0,0,0,0])


from keras.preprocessing.text import Tokenizer

tokenizer = Tokenizer()

#this creates the dictionary
#IMPORTANT: MUST HAVE ALL DATA - including Test data
#IMPORTANT2: This method should be called only once!!!
tokenizer.fit_on_texts(docs)

#this transforms the texts in to sequences of indices
encoded_docs2 = tokenizer.texts_to_sequences(docs)

encoded_docs2

max_length = 4
padded_docs2 = pad_sequences(encoded_docs2, maxlen=max_length, padding='post')
max_index = array(padded_docs2).reshape((-1,)).max()



# define the model
model = Sequential()
model.add(Embedding(max_index+1, 8, input_length=max_length))# you cannot use just max_index 
model.add(Flatten())
model.add(Dense(1, activation='sigmoid'))
# compile the model
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])
# summarize the model
print(model.summary())
# fit the model
model.fit(padded_docs2, labels, epochs=50, verbose=0)
# evaluate the model
loss, accuracy = model.evaluate(padded_docs2, labels, verbose=0)
print('Accuracy: %f' % (accuracy*100))

embeddings = model.layers[0].get_weights()[0]

embedding_for_word_14 = embeddings[14]                    # embedding vector for the word with index 14
index = tokenizer.texts_to_sequences([['well']])[0][0]    # index of the word 'well'
tokenizer.document_count   # number of documents the tokenizer was fitted on
tokenizer.word_index       # the word -> index dictionary

1 - Yes, word unicity is not guaranteed, see the docs:

  • From one_hot: This is a wrapper to the hashing_trick function...
  • From hashing_trick: "Two or more words may be assigned to the same index, due to possible collisions by the hashing function. The probability of a collision is in relation to the dimension of the hashing space and the number of distinct objects."

It would be better to use a Tokenizer for this. (See question 4)

It's very important to remember that you should include all words at once when creating the indices. You cannot use a function to create a dictionary with 2 words, then again with another 2 words, and so on... This will create very wrong dictionaries.
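
As a quick illustration (a hedged sketch using the distinct words from docs), you can count how many hashing collisions one_hot produces for this vocab_size:

from keras.preprocessing.text import one_hot

# hedged sketch: count hashing collisions for the distinct words in docs
# (indices depend on the hash, so the exact collisions may differ between runs/setups)
vocab_size = 50
words = ['well', 'done', 'good', 'work', 'great', 'effort', 'nice',
         'excellent', 'weak', 'poor', 'not', 'could', 'have', 'better']
indices = {w: one_hot(w, vocab_size)[0] for w in words}
print(indices)  # word -> hashed index
print('colliding words:', len(words) - len(set(indices.values())))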


2 - Embeddings have the size 50 x 8, because that was defined in the embedding layer:

Embedding(vocab_size, 8, input_length=max_length)
  • vocab_size = 50 - this means there are 50 words in the dictionary
  • embedding_size = 8 - this is the true size of the embedding: each word is represented by a vector of 8 numbers.

3 - You don't know. They use the same embedding.

The system will use the same embedding (the one for index = 2). This is not healthy for your model at all. You should use another method for creating the indices, as discussed in question 1.
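
A hedged sketch (assuming the trained model from the question) of what this sharing looks like - both colliding words are looked up in the same row of the embedding matrix:

# hedged sketch: whatever words hashed to index 2 all share this single vector
embeddings = model.layers[0].get_weights()[0]   # shape (50, 8)
shared_vector = embeddings[2]                   # used for 'done' AND 'work' here
print(shared_vector)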


4 - You can create a word dictionary manually, or use the Tokenizer class.

Manually:

Make sure you remove punctuation and make all words lower case.

Just create a dictionary entry for each word you have:

dictionary = dict()
current_key = 1

for doc in docs:
    for word in doc.split(' '):
        # remove punctuation and lower-case the word (this might be boring)
        word = word.lower().strip('!.')

        if not (word in dictionary):
            dictionary[word] = current_key
            current_key += 1
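
For example (a hedged sketch, assuming the dictionary built above), you could then encode and pad the documents with this dictionary instead of one_hot:

# hedged sketch: encode docs with the manual dictionary (same cleaning as above)
encoded_manual = [[dictionary[w.lower().strip('!.')] for w in doc.split(' ')]
                  for doc in docs]
padded_manual = pad_sequences(encoded_manual, maxlen=4, padding='post')
vocab_size = len(dictionary) + 1   # +1 because index 0 is reserved for padding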

Tokenizer:

from keras.preprocessing.text import Tokenizer

tokenizer = Tokenizer()

#this creates the dictionary
#IMPORTANT: MUST HAVE ALL DATA - including Test data
#IMPORTANT2: This method should be called only once!!!
tokenizer.fit_on_texts(docs)

#this transforms the texts in to sequences of indices
encoded_docs2 = tokenizer.texts_to_sequences(docs)

See the output of encoded_docs2:

[[6, 2], [3, 1], [7, 4], [8, 1], [9], [10], [5, 4], [11, 3], [5, 1], [12, 13, 2, 14]]

See the maximum index:

padded_docs2 = pad_sequences(encoded_docs2, maxlen=max_length, padding='post')
max_index = array(padded_docs2).reshape((-1,)).max()

So, your vocab_size should be 15 (otherwise you'd have lots of useless - and harmless - embedding rows). Notice that 0 was not used as an index. It will appear in padding!!!
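
A hedged sketch of deriving vocab_size from the fitted tokenizer instead of hard-coding it:

# hedged sketch: word indices start at 1 and 0 is reserved for padding, hence the +1
vocab_size = len(tokenizer.word_index) + 1   # 15 for these docs
# equivalent, using the padded sequences from above:
assert vocab_size == padded_docs2.max() + 1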

Do not "fit" the tokenizer again! 不要再“适应”标记器了! Only use texts_to_sequences() or other methods here that are not related to "fitting". 此处仅使用与“拟合”无关的texts_to_sequences()或其他方法。

Hint: it might be useful to include end_of_sentence words in your text sometimes.

Hint 2: it is a good idea to save your Tokenizer to be used later (since it has a specific dictionary for your data, created with fit_on_texts).

#save:
tokenizer_json = tokenizer.to_json()
with open('tokenizer.json', 'w', encoding='utf-8') as f:
    f.write(tokenizer_json)

#load:
from keras.preprocessing.text import tokenizer_from_json
with open('tokenizer.json', 'r', encoding='utf-8') as f:
    tokenizer = tokenizer_from_json(f.read())

5 - Params for the embedding are correct.

Dense:

Params for Dense are always based on the preceding layer (the Flatten in this case).

The formula is: previous_output * units + units

This results in 32 (from the Flatten) * 1 (Dense units) + 1 (Dense bias = units) = 33

Flatten:

It gets all the previous dimensions multiplied: 8 * 4.
The Embedding outputs length = 4 and embedding_size = 8.
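
If you want to double-check these numbers programmatically (a hedged sketch, assuming the compiled model from the question), each layer reports its own parameter count:

# hedged sketch: print each layer's parameter count
for layer in model.layers:
    print(layer.name, layer.count_params())
# expected: embedding -> 400 (50*8), flatten -> 0, dense -> 33 (32*1 + 1)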


6 - The Embedding layer is not dependent on your data or how you preprocess it.

The Embedding layer simply has the size 50 x 8 because you told it so. (See question 2)

There are, of course, better ways of preprocessing the data - see question 4.

This will lead you to a better choice of vocab_size (which is the dictionary size).

Seeing the embedding of a word:

Get the embeddings matrix:

embeddings = model.layers[0].get_weights()[0]

Choose any word index:

embedding_for_word_7 = embeddings[7]

That's all.

If you're using a tokenizer, get the word index with:

index = tokenizer.texts_to_sequences([['word']])[0][0]
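
And, tying this back to question 4, a hedged sketch of the word_to_index mapping and the words_embeddings dict built from the tokenizer:

# hedged sketch: the tokenizer's word_index is exactly the word_to_index mapping
word_to_index = tokenizer.word_index
words_embeddings = {w: embeddings[idx] for w, idx in word_to_index.items()}
print(words_embeddings['work'])   # the 8-dimensional vector for 'work'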
