
How to preprocess a dataset for a BERT model implemented in TensorFlow 2.x?

Overview

I have a dataset for a classification problem. It has two columns: one is sentences and the other is labels (10 labels in total). I'm trying to preprocess this dataset so it can be fed to a BERT classification model implemented in TensorFlow 2.x. However, I can't preprocess the dataset correctly to produce the PrefetchDataset the model expects as input.

What I did

  • The dataframe is balanced and shuffled (every label has 18708 rows)
  • Dataframe shape: (187080, 2)
  • from sklearn.model_selection import train_test_split was used to split the dataframe (a sketch of the assumed call follows this list)
  • 80% train data, 20% test data
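
For reference, a minimal sketch of how the split could have been done; test_size and stratify are assumptions (the question only states an 80/20 split on a balanced dataframe), and the column names 'sentences' and 'labels' are taken from the description:

from sklearn.model_selection import train_test_split

# 80% train / 20% test; stratifying on the label keeps all 10 classes balanced in both splits
X_train, X_test, y_train, y_test = train_test_split(
    df['sentences'].values, df['labels'].values,
    test_size=0.2, stratify=df['labels'])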

Training data:

X_train

array(['i hate megavideo  stupid time limits',
       'wow this class got wild quick  functions are a butt',
       'got in trouble no cell phone or computer for a you later twitter',
       ...,
       'we lied down around am rose a few hours later party still going lt',
       'i wanna miley cyrus on brazil  i love u my diva miley rocks',
       'i know i hate it i want my dj danger bck'], dtype=object)

y_train

array(['unfriendly', 'unfriendly', 'unfriendly', ..., 'pos_hp',
       'friendly', 'friendly'], dtype=object)

BERT preprocessing Xy_dataset

AUTOTUNE = tf.data.AUTOTUNE  # let tf.data pick the prefetch buffer size

train_Xy_slices = tf.data.Dataset.from_tensor_slices(tensors=(X_train, y_train))
dataset_train_Xy = train_Xy_slices.batch(batch_size=32)
dataset_train_Xy = dataset_train_Xy.prefetch(buffer_size=AUTOTUNE)  # matches the PrefetchDataset shown below

Output

dataset_train_Xy
<PrefetchDataset shapes: ((None,), (None,)), types: (tf.string, tf.string)>


for i in dataset_train_Xy:
    print(i)
(
<tf.Tensor: shape=(32,), dtype=string, numpy=
array([b'some of us had to work al day',
       ...
       b'feels claudia cazacus free falling feat audrey gallagher amp thomas bronzwaers look ahead are the best trance offerings this summer'], dtype=object)>,
 
<tf.Tensor: shape=(32,), dtype=string, numpy=
array([b'interested', b'uninterested', b'happy', b'friendly', b'neg_hp',
       ...
       b'friendly', b'insecure', b'pos_hp', b'interested', b'happy'],
      dtype=object)>
)

Expected output (example)

dataset_train_Xy
<PrefetchDataset shapes: ({input_word_ids: (None, 128), input_mask: (None, 128), input_type_ids: (None, 128)}, (None,)), types: ({input_word_ids: tf.int32, input_mask: tf.int32, input_type_ids: tf.int32}, tf.int64)>

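A sketch of one way to produce that structure, assuming the whole TF Hub preprocessing model is called (rather than only its tokenize step) and that the 10 string labels are mapped to integer ids with tf.keras.layers.StringLookup; both choices are assumptions, not code from the question:

import tensorflow as tf
import tensorflow_hub as hub

preprocess = hub.KerasLayer(
    "https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3")

# Map the 10 string labels to integer ids (num_oov_indices=0 assumes every label is known)
label_lookup = tf.keras.layers.StringLookup(
    vocabulary=sorted(set(y_train)), num_oov_indices=0)

def to_bert_inputs(text, label):
    # The preprocessing model returns a dict with input_word_ids, input_mask and
    # input_type_ids, each of shape (batch, 128) by default
    return preprocess(text), tf.cast(label_lookup(label), tf.int64)

dataset_train_Xy = (
    tf.data.Dataset.from_tensor_slices((X_train, y_train))
    .batch(32)
    .map(to_bert_inputs)
    .prefetch(tf.data.AUTOTUNE))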

Observations / problem

I know I need to tokenize X_train and y_train, but when I tried to tokenize I got an error:

AUTOTUNE = tf.data.AUTOTUNE  # let tf.data pick the prefetch buffer size

train_Xy_slices = tf.data.Dataset.from_tensor_slices(tensors=(X_train, y_train))
dataset_train_Xy = train_Xy_slices.batch(batch_size=batch_size)  # batch_size = 32

print(type(dataset_train_Xy))

# Tokenize the text to word pieces.
bert_preprocess = hub.load(tfhub_handle_preprocess)
tokenizer = hub.KerasLayer(bert_preprocess.tokenize, name='tokenizer')

dataset_train_Xy = dataset_train_Xy.map(lambda ex: (tokenizer(ex), ex[1]))  # ex[1] was intended to be the label
dataset_train_Xy = dataset_train_Xy.prefetch(buffer_size=AUTOTUNE)

Traceback

<class 'tensorflow.python.data.ops.dataset_ops.BatchDataset'>
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-69-8e486f7b671b> in <module>()
     14 tokenizer = hub.KerasLayer(bert_preprocess.tokenize, name='tokenizer')
     15 
---> 16 dataset_train_Xy = dataset_train_Xy.map(lambda ex: (tokenizer(ex), ex[1])) #    print(i[1]) #labels
     17 dataset_train_Xy = dataset_train_Xy.prefetch(buffer_size=AUTOTUNE)

10 frames
/usr/local/lib/python3.7/dist-packages/tensorflow/python/autograph/impl/api.py in wrapper(*args, **kwargs)
    668       except Exception as e:  # pylint:disable=broad-except
    669         if hasattr(e, 'ag_error_metadata'):
--> 670           raise e.ag_error_metadata.to_exception(e)
    671         else:
    672           raise

TypeError: in user code:


    TypeError: <lambda>() takes 1 positional argument but 2 were given
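
The cause of the TypeError: when a dataset yields (text, label) tuples, Dataset.map calls the mapping function with one positional argument per tuple component, so a one-argument lambda receives two arguments and fails. A minimal sketch of the corrected call (note that bert_preprocess.tokenize returns a RaggedTensor of word-piece ids, not the packed dict shown in the expected output; producing that dict would additionally require bert_preprocess.bert_pack_inputs, or simply calling the whole preprocessing model as sketched earlier):

dataset_train_Xy = dataset_train_Xy.map(
    lambda text, label: (tokenizer(text), label))  # map unpacks the tuple into two arguments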

Working sample BERT model

# import the necessary modules
import tensorflow as tf
import tensorflow_hub as hub

data = {'input' :['i hate megavideo  stupid time limits',
       'wow this class got wild quick  functions are a butt',
       'got in trouble no cell phone or computer for a you later twitter',
       'we lied down around am rose a few hours later party still going lt',
       'i wanna miley cyrus on brazil  i love u my diva miley rocks',
       'i know i hate it i want my dj danger bck'],
        'label' : ['unfriendly', 'unfriendly', 'unfriendly', 'unfriendly',
       'friendly', 'friendly']}
        
import pandas as pd
df = pd.DataFrame(data)

df['category']=df['label'].apply(lambda x: 1 if x=='friendly' else 0)

from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(df['input'],df['category'], stratify=df['category'])

bert_preprocess = hub.KerasLayer("https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3")
bert_encoder = hub.KerasLayer("https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/4")

def get_sentence_embedding(sentences):
    # Run raw strings through the preprocessing layer, then the encoder,
    # and return the pooled [CLS] representation
    preprocessed_text = bert_preprocess(sentences)
    return bert_encoder(preprocessed_text)['pooled_output']

get_sentence_embedding([
    "we lied down around am rose", 
    "i hate it i want my dj"]
)
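
For reference, with this encoder (bert_en_uncased_L-12_H-768_A-12) the pooled output is a 768-dimensional vector per sentence, so the call above returns a tensor of shape (2, 768); the check below is illustrative, not from the question:

embeddings = get_sentence_embedding(["we lied down around am rose",
                                     "i hate it i want my dj"])
print(embeddings.shape)  # (2, 768)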

#Build model
# Bert layers
text_input = tf.keras.layers.Input(shape=(), dtype=tf.string, name='text')
preprocessed_text = bert_preprocess(text_input)
outputs = bert_encoder(preprocessed_text)

# Neural network layers
l = tf.keras.layers.Dropout(0.1, name="dropout")(outputs['pooled_output'])
l = tf.keras.layers.Dense(1, activation='sigmoid', name="output")(l)

# Use inputs and outputs to construct a final model
model = tf.keras.Model(inputs=[text_input], outputs = [l])

model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])

model.fit(X_train, y_train, epochs=10)
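
The working sample is binary, while the actual problem has 10 labels. A hedged sketch of the changes a 10-class version would need, assuming the string labels are first converted to integer ids (for example with the StringLookup layer sketched earlier):

# Replace the single sigmoid unit with a 10-way softmax head
l = tf.keras.layers.Dense(10, activation='softmax', name="output")(l)

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',  # expects integer label ids
              metrics=['accuracy'])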
