
Applying a function to all columns in a numpy array

I am extremely new to numpy.

Just wondering why this does not work.

print items['description'] 

Yields

0                            Продам Камаз 6520 20 тонн
1                                      Весь в тюнинге.
2    Телефон в хорошем состоянии, трещин и сколов н...
3    Отличный подарок на новый год от "китайской ap...
4        Лыжные ботинки в хорошем состоянии, 34 размер
Name: description, dtype: object

I am trying to apply this method to all the rows in this column.

items['description'] = vectorize_sentence(items['description'].astype(str))

This is the function definition for vectorize_sentence.

def vectorize_sentence(self, sentence):
    # Tokenize 
    print 'sentence', sentence

    tkns = self._tokenize(sentence)
    vec = None
    for tkn in tkns: 
        print 'tkn', tkn.decode('utf-8')
        print type(tkn)
        if self.model[tkn.decode('utf-8')]:
            vec = sum(vec, self.model[tkn.decode('utf-8')])
    #vec = sum([self.model[x] for x in tkns if x in self.model])
    #print vec
def _tokenize(self, sentence):
    return sentence.split(' ')

Error Message:

 AttributeError: 'Series' object has no attribute 'split' 

You're getting that error because a 'Series' object has no attribute 'split'. Mainly, .astype(str) does not return a single long string like you think it does; it returns a Series of strings:

items = pd.DataFrame({'description': ['bob loblaw', 'john wayne', 'lady gaga loves to sing']})
sentence = items['description'].astype(str)
sentence.split(' ')  # AttributeError: 'Series' object has no attribute 'split'

Now try:

sentence = ' '.join(x for x in items['description'])
sentence.split(' ')
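Note that joining first flattens the row boundaries, so the resulting list mixes tokens from every row into one flat list (a quick check with the same toy DataFrame):

```python
import pandas as pd

items = pd.DataFrame({'description': ['bob loblaw', 'john wayne', 'lady gaga loves to sing']})

# Join every row into one long string, then split on spaces.
sentence = ' '.join(x for x in items['description'])
print(sentence.split(' '))
# ['bob', 'loblaw', 'john', 'wayne', 'lady', 'gaga', 'loves', 'to', 'sing']
```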

and then implement it in your function:

def _tokenize(self, sentence):
    # Note: this joins the global items['description'] and ignores the
    # sentence argument, so all rows are tokenized together.
    return ' '.join(x for x in items['description']).split(' ')
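If you instead want a separate token list per row (so each description can get its own vector), one option is Series.apply, which calls the function once per element, so each value is a plain str by the time it reaches split(). A sketch using the same toy DataFrame:

```python
import pandas as pd

items = pd.DataFrame({'description': ['bob loblaw', 'john wayne', 'lady gaga loves to sing']})

# .apply runs the lambda on each element of the Series individually,
# so split() operates on a string, not on the whole Series.
tokens_per_row = items['description'].astype(str).apply(lambda s: s.split(' '))
print(tokens_per_row.tolist())
# [['bob', 'loblaw'], ['john', 'wayne'], ['lady', 'gaga', 'loves', 'to', 'sing']]
```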
