
Problem using a dictionary of numpy arrays (indexing it wrong)

I'm trying to code Gaussian Naive Bayes from scratch using Python and NumPy, but I'm having trouble creating the word frequency table.

I have a dictionary with N words as keys, and each of these N words has a numpy array associated with it.

Example:

freq_table['subject'] -> vector of occurrences of this word, of length nrows, where nrows is the size of the dataset.

So for each row i in the dataset I'm doing freq_table[WORD][i] += 1.
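In other words, the structure I'm aiming for looks roughly like this (just a sketch; dataset and vocabulary are placeholder names standing in for my real data and for the dictionary built by self.dictionary()):

import numpy as np

nrows = len(dataset)  # number of documents/rows (placeholder name)

# one per-document counter for every word in the vocabulary (placeholder name)
freq_table = {word: np.zeros(nrows, dtype=int) for word in vocabulary}

# freq_table['subject'][i] should end up holding how many times
# 'subject' appears in document i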

def train(self, X):
        # Creating the dictionary
        self.dictionary(X.data[:100])

        # Calculating the class prior probabilities
        self.p_class = self.prior_probs(X.target)

        # Calculating the likelihoods
        nrows = len(X.data[:100])
        freq = dict.fromkeys(self._dict, nrows * [0])

        for doc, target, i in zip(X.data[:2], X.target[:2], range(2)):
            print('doc [%d] out of %d' % (i, nrows))

            words = preprocess(doc)

            print(len(words), i)

            for j, w in enumerate(words):
                print(w, j)

                # Get the vector associated with the word w
                vec = freq[w]

                # At the ith position (observation index), add one occurrence
                vec[i] += 1

        print(freq['subject'])

The output is

Dictionary length 4606

doc [0] out of 100
43 0
wheres 0
thing 1
subject 2
nntppostinghost 3
racwamumdedu 4
organization 5
university 6
maryland 7
college 8
lines 9
wondering 10
anyone 11
could 12
enlighten 13
sports 14
looked 15
early 16
called 17
bricklin 18
doors 19
really 20
small 21
addition 22
front 23
bumper 24
separate 25
anyone 26
tellme 27
model 28
engine 29
specs 30
years 31
production 32
history 33
whatever 34
funky 35
looking 36
please 37
email 38
thanks 39
brought 40
neighborhood 41
lerxst 42
[43, 53, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]

It seems that I'm indexing the dictionary and vector wrong.

There shouldn't be 43 or 53 occurrences of the word 'subject'; those numbers are just the lengths of the preprocessed word lists of the two documents (43 and 53 words).

The code has at least two errors:

1) In the line

freq = dict.fromkeys(self._dict, nrows * [0])

You initialize all items in the freq dictionary with the same list. nrows * [0] is evaluated once to create a list, which is then passed to the dict.fromkeys() function. The reference to this one list is assigned to all of the keys in the freq dictionary. No matter which key you select, you get a reference to the same list. This is a common gotcha in Python.

Instead, you can use a dictionary comprehension to create the entries with separate lists:

freq = {key: nrows * [0] for key in self._dict}
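A short, self-contained snippet makes the difference visible:

nrows = 3

# dict.fromkeys(): every key refers to the SAME list object
shared = dict.fromkeys(['a', 'b'], nrows * [0])
shared['a'][0] += 1
print(shared['b'])                     # [1, 0, 0] -- 'b' was modified as well
print(shared['a'] is shared['b'])      # True

# dict comprehension: nrows * [0] is re-evaluated for every key
separate = {key: nrows * [0] for key in ['a', 'b']}
separate['a'][0] += 1
print(separate['b'])                   # [0, 0, 0] -- unaffected
print(separate['a'] is separate['b'])  # False

This is also why freq['subject'] ends up holding the document lengths 43 and 53: every word of document i increments position i of the one shared list.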

2) You use i as your indexing variable for vec, but you meant to use j:

vec[j] += 1

Using variables with descriptive names would help avoid this type of confusion.
