
How can I test a word2vec model on development data?

In a programming assignment, we are asked to implement the word2vec algorithm to generate dense vectors for some words using a neural network. I implemented the neural network and trained it on the training data. First, how can I test it on the test data? The assignment asks for a plot showing the perplexity on the training and test data over the training epochs. I can do that for the loss, which looks something like this:

EPOCH: 0 LOSS: 27030.09155006593
EPOCH: 0 P_LOSS: 24637.964948774144
EPOCH: 0 PP: inf
/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:121: RuntimeWarning: overflow encountered in double_scalars
EPOCH: 1 LOSS: 25349.086587261085
EPOCH: 1 P_LOSS: 22956.95998596929
EPOCH: 1 PP: inf
EPOCH: 2 LOSS: 24245.455581381622
EPOCH: 2 P_LOSS: 21853.32898008983
EPOCH: 2 PP: inf
EPOCH: 3 LOSS: 23312.976009712416
EPOCH: 3 P_LOSS: 20920.849408420647

I obtained this output with the following code:

    # CYCLE THROUGH EACH EPOCH
    for i in range(0, self.epochs):

        self.loss = 0
        self.loss_prob = 0
        # CYCLE THROUGH EACH TRAINING SAMPLE
        for w_t, w_c in training_data:

            # FORWARD PASS
            y_pred, h, u = self.forward_pass(w_t)

            # CALCULATE ERROR
            EI = np.sum([np.subtract(y_pred, word) for word in w_c], axis=0)

            # BACKPROPAGATION
            self.backprop(EI, h, w_t)

            # CALCULATE LOSS
            self.loss += -np.sum([u[word.index(1)] for word in w_c]) + len(w_c) * np.log(np.sum(np.exp(u)))
            self.loss_prob += -2*np.log(len(w_c)) -np.sum([u[word.index(1)] for word in w_c]) + (len(w_c) * np.log(np.sum(np.exp(u))))

        print('EPOCH:',i, 'LOSS:', self.loss)
        print('EPOCH:',i, 'P_LOSS:', self.loss_prob)
        print('EPOCH:',i, 'PP:', 2**self.loss_prob)

However, I don't know how to find the perplexity for the training and development data in each epoch. Based on this question, perplexity is said to be 2**loss. However, when I tried that formula, I got inf. Could you guide me on how to calculate perplexity? Can I do it within the current code, or should I apply a function over the whole development data?

Indeed, 2**20000 overflows a float. You should normalize the loss per example, i.e., divide it by the size of your training data. Back-propagation still works if you only sum the per-sample losses, because dividing by a constant merely rescales the gradient and does not change the optimum, but a summed loss is not invariant to the data size.
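For example, here is a minimal sketch of a per-token perplexity helper, assuming the loss is accumulated with np.log as in the code above (i.e. in nats, so the base is e rather than 2) and that n_context_words counts every context word the loss was summed over; the numbers in the usage comment are purely hypothetical:

    import numpy as np

    def perplexity(total_nll, n_context_words):
        """Per-token perplexity from a summed negative log-likelihood.

        total_nll       -- cross-entropy summed over all (target, context) pairs,
                           in nats (natural log, as produced by np.log)
        n_context_words -- number of context words the sum ran over
        """
        avg_nll = total_nll / n_context_words   # normalize per example
        return np.exp(avg_nll)                  # exp, not 2**, because the loss is in nats

    # Hypothetical usage: with a P_LOSS of about 24638 as in epoch 0 above and,
    # say, 8000 context words in the training set, perplexity(24638, 8000) is
    # roughly exp(3.08) ~ 21.8, which stays well within float range.

The same helper works for the development data in each epoch: run only the forward pass over the development pairs (no backprop), accumulate the same loss, and divide by the number of development context words.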

Alternatively, you can work around the inf directly with Python's decimal module:

    import decimal
    decimal.Decimal(2)**decimal.Decimal(loss)
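As a rough usage sketch (the loss value is simply copied from the P_LOSS printout above):

    import decimal

    loss = 22956.95998596929                    # P_LOSS from epoch 1 above
    pp = decimal.Decimal(2) ** decimal.Decimal(loss)
    print('PP:', pp)                            # a finite Decimal on the order of 1E+6910, not inf

Note that this only makes the huge number representable; the per-example normalization described above is what makes the perplexity comparable across data sizes.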
