
Error in multiclass text classification with a pre-trained BERT model

I am trying to use Google's pre-trained BERT model to classify texts into 34 mutually exclusive classes. After preparing the "train", "dev" and "test" TSV files that BERT expects as input, I tried to run the following command in my Colab (Jupyter) notebook:

!python bert/run_classifier.py \
  --task_name=cola \
  --do_train=true \
  --do_eval=true \
  --data_dir=./Bert_Input_Folder \
  --vocab_file=./uncased_L-24_H-1024_A-16/vocab.txt \
  --bert_config_file=./uncased_L-24_H-1024_A-16/bert_config.json \
  --init_checkpoint=./uncased_L-24_H-1024_A-16/bert_model.ckpt \
  --max_seq_length=512 \
  --train_batch_size=32 \
  --learning_rate=2e-5 \
  --num_train_epochs=3.0 \
  --output_dir=./Bert_Output_Folder
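
For context, the input files for the cola task have to match the layout that run_classifier.py's CoLA processor reads: tab-separated, no header for train.tsv and dev.tsv, label in the second column, text in the fourth. A minimal sketch of writing such a file under that assumption (file path and example rows are illustrative only):

import csv

# Hypothetical example rows: (label, text). The labels must be exactly the
# strings returned by get_labels(), here "0" .. "33".
rows = [
    ("0", "first training sentence"),
    ("33", "another training sentence"),
]

# CoLA-style train.tsv: tab-separated, no header, columns
# guid, label, dummy, text (label in column 2, text in column 4).
with open("./Bert_Input_Folder/train.tsv", "w", newline="") as f:
    writer = csv.writer(f, delimiter="\t")
    for i, (label, text) in enumerate(rows):
        writer.writerow(["train-%d" % i, label, "a", text])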

I get the following error:

WARNING: The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:

https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
https://github.com/tensorflow/addons
If you depend on functionality not listed there, please file an issue.
WARNING:tensorflow:Estimator's model_fn (<function model_fn_builder.<locals>.model_fn at 0x7f4b945a01e0>) includes params argument, but params are not passed to Estimator.
INFO:tensorflow:Using config: {'_model_dir': './Bert_Output_Folder', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': 1000, '_save_checkpoints_secs': None, '_session_config': allow_soft_placement: true
graph_options {
rewrite_options {
meta_optimizer_iterations: ONE
}
}
, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': None, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_service': None, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7f4b94f366a0>, '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1, '_tpu_config': TPUConfig(iterations_per_loop=1000, num_shards=8, num_cores_per_replica=None, per_host_input_for_training=3, tpu_job_name=None, initial_infeed_sleep_secs=None, input_partition_dims=None), '_cluster': None}
INFO:tensorflow:_TPUContext: eval_on_tpu True
WARNING:tensorflow:eval_on_tpu ignored because use_tpu is False.
INFO:tensorflow:Writing example 0 of 23834
Traceback (most recent call last):
  File "bert/run_classifier.py", line 981, in <module>
    tf.app.run()
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/platform/app.py", line 125, in run
    _sys.exit(main(argv))
  File "bert/run_classifier.py", line 870, in main
    train_examples, label_list, FLAGS.max_seq_length, tokenizer, train_file)
  File "bert/run_classifier.py", line 490, in file_based_convert_examples_to_features
    max_seq_length, tokenizer)
  File "bert/run_classifier.py", line 459, in convert_single_example
    label_id = label_map[example.label]
KeyError: '33'
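
For reference, the KeyError is raised by the label lookup in convert_single_example: run_classifier.py builds a dict from the strings returned by get_labels() and then indexes it with the label string read from each TSV row, so KeyError: '33' means that at run time the string '33' was not among the returned labels. A rough, self-contained reconstruction of that lookup (variable names are illustrative, not the exact upstream code):

# Rough reconstruction of the failing lookup (illustrative only; the real
# code lives in convert_single_example in bert/run_classifier.py).
label_list = [str(x) for x in range(33)]   # suppose "33" is missing at run time

label_map = {}
for i, label in enumerate(label_list):
    label_map[label] = i                   # "0" -> 0, ..., "32" -> 32

example_label = "33"                       # label string read from train.tsv
try:
    label_id = label_map[example_label]
except KeyError as err:
    print("KeyError:", err)                # prints: KeyError: '33'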

In the "run_classifier.py" script, I modified the "get_labels()" function, which was originally written for a binary classification task, to return all 34 of my classes:

def get_labels(self):
    """See base class."""
    return ["0", "1", "2", ..., "33"]

Any idea what is going wrong, or whether I am missing some other necessary modification?

Thanks!

Simply replacing ['0', '1', '2', ... '33'] with [str(x) for x in range(34)] in the get_labels function solved it (the two are in principle equivalent, but for some unknown reason this fixed the problem).
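
In other words, the modified get_labels method ends up as (a minimal sketch of the fix described above):

def get_labels(self):
    """See base class. Returns the 34 class labels as strings "0" .. "33"."""
    return [str(x) for x in range(34)]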

