Preprocessing text for siamese network
I want to build a siamese network to compare the similarity of two strings.
I am trying to follow this tutorial. The example works on images, but I want to use string representations (at the character level), and I am stuck on the text preprocessing.
Suppose I have two inputs:
string_a = ["one","two","three"]
string_b = ["four","five","six"]
I need to prepare this data as input for my model. To do that, I need to:
So I am trying the following:
import tensorflow as tf
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

# create a tokenizer
tok = Tokenizer(char_level=True, oov_token="?")
tok.fit_on_texts(string_a + string_b)
char_index = tok.word_index
maxlen = max([len(x) for x in tok.texts_to_sequences(string_a + string_b)])

# create a dataset
dataset_a = tf.data.Dataset.from_tensor_slices(string_a)
dataset_b = tf.data.Dataset.from_tensor_slices(string_b)
dataset = tf.data.Dataset.zip((dataset_a, dataset_b))

# preprocessing functions
def tokenize_string(data, tokenizer, max_len):
    """vectorize string with a given tokenizer"""
    sequence = tokenizer.texts_to_sequences(data)
    return_seq = pad_sequences(sequence, maxlen=max_len, padding="post", truncating="post")
    return return_seq[0]

def preprocess_couple(string_1, string_2):
    """given 2 strings, tokenize them and return an array"""
    return (
        tokenize_string([string_1], tok, maxlen),
        tokenize_string([string_2], tok, maxlen)
    )

# shuffle and preprocess dataset
dataset = dataset.shuffle(buffer_size=2)
dataset = dataset.map(preprocess_couple)
But I get an error:
AttributeError: in user code:
<ipython-input-29-b920d389ea82>:29 preprocess_couple *
tokenize_string([string_2], tok, maxlen)
<ipython-input-29-b920d389ea82>:20 tokenize_string *
sequence = tokenizer.texts_to_sequences(data)
C:\HOMEWARE\Miniconda3-Windows-x86_64\envs\embargo_text\lib\site-packages\keras_preprocessing\text.py:281 texts_to_sequences *
return list(self.texts_to_sequences_generator(texts))
C:\HOMEWARE\Miniconda3-Windows-x86_64\envs\embargo_text\lib\site-packages\keras_preprocessing\text.py:306 texts_to_sequences_generator **
text = text.lower()
C:\HOMEWARE\Miniconda3-Windows-x86_64\envs\embargo_text\lib\site-packages\tensorflow\python\framework\ops.py:401 __getattr__
self.__getattribute__(name)
The state of the dataset before applying the preprocess_couple function is as follows:
(<tf.Tensor: shape=(), dtype=string, numpy=b'two'>, <tf.Tensor: shape=(), dtype=string, numpy=b'five'>)
(<tf.Tensor: shape=(), dtype=string, numpy=b'three'>, <tf.Tensor: shape=(), dtype=string, numpy=b'six'>)
(<tf.Tensor: shape=(), dtype=string, numpy=b'one'>, <tf.Tensor: shape=(), dtype=string, numpy=b'four'>)
I think this error comes from the fact that from_tensor_slices converts the strings to tensors, so inside Dataset.map the tokenizer receives string tensors (which have no .lower() method) instead of Python strings. But what is the correct way to preprocess this data for input?
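One way to sidestep the problem described above is to tokenize and pad eagerly, before the data ever enters tf.data, so that Dataset.map never has to call Python-level tokenizer code on symbolic tensors. The sketch below is a minimal pure-Python stand-in for the Tokenizer + pad_sequences step (the `char_to_id` mapping and `tokenize_and_pad` helper are hypothetical names, and the ids are assigned alphabetically rather than by character frequency as Keras does):

```python
string_a = ["one", "two", "three"]
string_b = ["four", "five", "six"]

# Build a char-level vocabulary eagerly; index 0 is reserved for padding,
# matching the Keras convention (ordering here is alphabetical, not
# frequency-based, so the exact ids differ from Tokenizer's word_index).
chars = sorted(set("".join(string_a + string_b)))
char_to_id = {c: i + 1 for i, c in enumerate(chars)}

maxlen = max(len(s) for s in string_a + string_b)

def tokenize_and_pad(strings, char_to_id, maxlen):
    """Map each string to a fixed-length list of character ids,
    padding with 0 at the end ("post" padding)."""
    out = []
    for s in strings:
        seq = [char_to_id[c] for c in s]
        out.append(seq + [0] * (maxlen - len(seq)))
    return out

seqs_a = tokenize_and_pad(string_a, char_to_id, maxlen)
seqs_b = tokenize_and_pad(string_b, char_to_id, maxlen)

# These plain integer lists can now be handed to
# tf.data.Dataset.from_tensor_slices((seqs_a, seqs_b)) directly,
# so no tokenizer call is needed inside Dataset.map.
```

Doing the string-to-id conversion up front like this keeps everything inside the graph-mode pipeline purely numeric, which tf.data handles natively.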
I don't fully understand what you are trying to achieve, but if you want to convert your text into vectors, this will help:
def process(data):
    tok = Tokenizer(char_level=True, oov_token="?")
    tok.fit_on_texts(data)
    maxlen = max([len(x) for x in tok.texts_to_sequences(data)])
    data = tok.texts_to_sequences(data)
    data = pad_sequences(data, maxlen=maxlen, padding='post')
    return data