

“Input is not valid. Should be a string, a list/tuple of strings or a list/tuple of integers.” ValueError: Input is not valid

I am using the Bert tokenizer for French and I am getting this error, but I do not seem to be able to solve it. Do you have any suggestions?


Traceback (most recent call last):
  File "training_cross_data_2.py", line 240, in <module>
    training_data(f, root, testdir, dict_unc)
  File "training_cross_data_2.py", line 107, in training_data
    Xtrain_emb, mdlname = get_flaubert_layer(data)
  File "training_cross_data_2.py", line 40, in get_flaubert_layer
    tokenized = texte.apply((lambda x: flaubert_tokenizer.encode(x, add_special_tokens=True, max_length=512, truncation=True)))
  File "/home/getalp/kelodjoe/anaconda3/envs/env/lib/python3.6/site-packages/pandas/core/series.py", line 3848, in apply
    mapped = lib.map_infer(values, f, convert=convert_dtype)
  File "pandas/_libs/lib.pyx", line 2329, in pandas._libs.lib.map_infer
  File "training_cross_data_2.py", line 40, in <lambda>
    tokenized = texte.apply((lambda x: flaubert_tokenizer.encode(x, add_special_tokens=True, max_length=512, truncation=True)))
  File "/home/anaconda3/envs/env/lib/python3.6/site-packages/transformers/tokenization_utils.py", line 907, in encode
    **kwargs,
  File "/home/anaconda3/envs/env/lib/python3.6/site-packages/transformers/tokenization_utils.py", line 1021, in encode_plus
    first_ids = get_input_ids(text)
  File "/home/anaconda3/envs/env/lib/python3.6/site-packages/transformers/tokenization_utils.py", line 1003, in get_input_ids
    "Input is not valid. Should be a string, a list/tuple of strings or a list/tuple of integers."
ValueError: Input is not valid. Should be a string, a list/tuple of strings or a list/tuple of integers.

I looked around for an answer, but whatever is proposed does not seem to work. Texte is a dataframe.

Here is the code:

def get_flaubert_layer(texte): # texte is a dataframe which I take from an Excel file
    
    language_model_dir= os.path.expanduser(args.language_model_dir)
    lge_size = language_model_dir[16:-1]   # modify when on jean zay 27:-1
    print(lge_size)
    flaubert = FlaubertModel.from_pretrained(language_model_dir)
    flaubert_tokenizer = FlaubertTokenizer.from_pretrained(language_model_dir)
    tokenized = texte.apply((lambda x: flaubert_tokenizer.encode(x, add_special_tokens=True, max_length=512, truncation=True)))
    max_len = 0
    for i in tokenized.values:
        if len(i) > max_len:
            max_len = len(i)
    padded = np.array([i + [0] * (max_len - len(i)) for i in tokenized.values])
    attention_mask = np.where(padded != 0, 1, 0)

I have another file with the same structure and it works, but in this case I do not know why I get this error. Should I re-download the model?

The file looks like this:

[screenshot of the data file]
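
One thing that may be worth checking first (the traceback goes through pandas/core/series.py, so whatever reaches apply behaves like a pandas Series): whether the column contains any non-string values, for example NaN coming from empty Excel cells, because a single non-string row is enough to make encode raise exactly this ValueError. A minimal diagnostic sketch, assuming texte is that Series:

# Hypothetical check: show which value types appear in the column and list the
# rows that are not plain strings (e.g. NaN floats from empty Excel cells).
# Assumes `texte` is the pandas Series later passed to .apply() in get_flaubert_layer.
print(texte.apply(type).value_counts())
non_strings = texte[~texte.apply(lambda x: isinstance(x, str))]
print(non_strings)   # any row listed here would make flaubert_tokenizer.encode() fail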

You may want to change this line:

tokenized = texte.apply((lambda x: flaubert_tokenizer.encode(x, add_special_tokens=True, max_length=512, truncation=True)))

to

tokenized = flaubert_tokenizer.encode(texte["verbatim"], 
    add_special_tokens=True, 
    max_length=512, 
    truncation=True)

This has two advantages:

  1. You don't pass a pandas row to the tokenize function (which I'm guessing is what was causing your error).
  2. You're not calling the encode function once per row, which will probably speed up tokenization.
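
If the goal is the same padding and attention mask that the question's function builds by hand, a batch call can produce both in one step. This is only a sketch, not the exact code above: it follows the answer's assumption that the text sits in a column named "verbatim", assumes every value is a plain string, and assumes a transformers version (3.0 or later) whose tokenizers accept padding/truncation arguments.

import numpy as np

# Convert the column to a list of plain Python strings, not a pandas Series.
texts = texte["verbatim"].astype(str).tolist()
encoded = flaubert_tokenizer.batch_encode_plus(
    texts,
    add_special_tokens=True,
    max_length=512,
    truncation=True,
    padding=True,                # pad every sequence to the longest one in the batch
    return_attention_mask=True,
)
padded = np.array(encoded["input_ids"])
attention_mask = np.array(encoded["attention_mask"])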
