
Pytorch: tensor normalization giving bad result

I have a tensor of longitudes/latitudes that I want to normalize. I want to feed this tensor to a neural network algorithm that returns the best trip between these different longitude/latitude points. I used this function:

from torch.nn.functional import normalize

t = normalize(locations)

These are two rows of my tensor:

[[ 0.0000, 36.4672, 36.4735, 36.4705, 36.4638, 36.4671],
 [ 0.0000, 10.7637, 10.7849, 10.7822, 10.7821, 10.7637]]

This is after normalization:

[[0.0000, 0.2181, 0.2181, 0.2181, 0.2179, 0.2179],
 [0.0000, 0.2186, 0.2194, 0.2194, 0.2196, 0.2188]]

As you can see, the result is not good: many values repeat, and this is affecting my results. Is there another way to normalize my tensor? I'm using PyTorch in this project.

This is how torch.nn.functional.normalize works: by default it applies L2 normalization along dim=1, dividing each row by its Euclidean norm, so a row whose non-zero entries are nearly identical collapses to values close to 1/sqrt(n).
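To see where the repeated values come from, you can reproduce the call by hand. The snippet below uses the tensor from the question and checks that F.normalize with its defaults simply divides each row by that row's L2 norm:

import torch
import torch.nn.functional as F

a = torch.tensor([[0.0000, 36.4672, 36.4735, 36.4705, 36.4638, 36.4671],
                  [0.0000, 10.7637, 10.7849, 10.7822, 10.7821, 10.7637]])

# F.normalize defaults to p=2, dim=1: each row is divided by its L2 norm,
# so five nearly identical non-zero values all end up near 1/sqrt(5) ~ 0.447.
manual = a / a.norm(p=2, dim=1, keepdim=True)
print(torch.allclose(F.normalize(a), manual))  # True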

In my opinion, you should divide your original tensor values by the maximum value that longitudes/latitudes can have, so that the tensor ends up with values in the range [0, 1].
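Here is a minimal sketch of that idea, assuming the first row holds latitudes (bounded by 90) and the second row longitudes (bounded by 180); swap the divisors if your ordering is the other way around:

import torch

locations = torch.tensor([[0.0000, 36.4672, 36.4735, 36.4705, 36.4638, 36.4671],
                          [0.0000, 10.7637, 10.7849, 10.7822, 10.7821, 10.7637]])

# Divide each coordinate by the largest value it can take so everything
# lands in [0, 1]; assumed row order: latitude, then longitude.
max_values = torch.tensor([[90.0], [180.0]])
scaled = locations / max_values
print(scaled)

Unlike the row-wise L2 normalization above, this only rescales each row by a constant, so distinct points stay distinct.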


Additionally, I've tried:

import torch
import torch.nn.functional as F

a = torch.tensor([[0.0000, 36.4672, 36.4735, 36.4705, 36.4638, 36.4671],
                  [0.0000, 10.7637, 10.7849, 10.7822, 10.7821, 10.7637]])
res = F.normalize(a)
print(res)

and the result was:

tensor([[0.0000, 0.4472, 0.4473, 0.4472, 0.4472, 0.4472],
        [0.0000, 0.4467, 0.4476, 0.4475, 0.4475, 0.4467]])

How did you get your results?
