
AttributeError: 'WordList' object has no attribute 'split'

I am trying to apply lemmatization after tokenizing my "script" column, but I get an AttributeError. I tried different things.

Here is my "script" column:

df_toklem["script"][0:5]
---------------------------------------------------------------------------
type(df_toklem["script"])

Output:

id
1    [ext, street, day, ups, man, big, pot, belly, ...
2    [credits, still, life, tableaus, lawford, n, h...
3    [fade, ext, convent, day, whispering, nuns, pr...
4    [fade, int, c, hercules, turbo, prop, night, e...
5    [open, theme, jaws, plane, busts, clouds, like...
Name: script, dtype: object
---------------------------------------------------------------------------
pandas.core.series.Series

And the code where I try to apply lemmatization:

from textblob import Word
nltk.download("wordnet")
df_toklem["script"].apply(lambda x: " ".join([Word(word).lemmatize() for word in x.split()]))

ERROR:

[nltk_data] Downloading package wordnet to
[nltk_data]     C:\Users\PC\AppData\Roaming\nltk_data...
[nltk_data]   Package wordnet is already up-to-date!
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-72-dbc80c619ec5> in <module>
      1 from textblob import Word
      2 nltk.download("wordnet")
----> 3 df_toklem["script"].apply(lambda x: " ".join([Word(word).lemmatize() for word in x.split()]))

~\Anaconda3\lib\site-packages\pandas\core\series.py in apply(self, func, convert_dtype, args, **kwds)
   4198             else:
   4199                 values = self.astype(object)._values
-> 4200                 mapped = lib.map_infer(values, f, convert=convert_dtype)
   4201 
   4202         if len(mapped) and isinstance(mapped[0], Series):

pandas\_libs\lib.pyx in pandas._libs.lib.map_infer()

<ipython-input-72-dbc80c619ec5> in <lambda>(x)
      1 from textblob import Word
      2 nltk.download("wordnet")
----> 3 df_toklem["script"].apply(lambda x: " ".join([Word(word).lemmatize() for word in x.split()]))

AttributeError: 'WordList' object has no attribute 'split'

I tried different things but unfortunately couldn't find a working solution. Thank you for your time.

What you are trying to do won't work because you are applying a string method (split) to a WordList. I would use nltk instead and create a new pandas column with the tokenized data:

import nltk

# apply row-wise (axis=1) so each row's "script" text gets tokenized;
# word_tokenize needs the "punkt" tokenizer data: nltk.download("punkt")
df_toklem['tokenized'] = df_toklem.apply(lambda row: nltk.word_tokenize(row['script']), axis=1)
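
Alternatively, since the traceback shows that each cell of "script" is already a WordList (a list of tokens), a minimal fix to the original lambda is to drop the .split() call and iterate over the list directly. This is a sketch based on the data shown above; the "lemmatized" column name is just an example:

from textblob import Word
import nltk
nltk.download("wordnet")

# each cell is already a list of tokens, so there is nothing to split;
# lemmatize each token and join the results back into one string
df_toklem["lemmatized"] = df_toklem["script"].apply(
    lambda tokens: " ".join(Word(word).lemmatize() for word in tokens)
)

Note that Word.lemmatize() treats every token as a noun by default; passing a part-of-speech tag, e.g. Word(word).lemmatize("v"), lemmatizes verbs instead.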
