
How to create a customisable variational autoencoder using PyTorch, and see its latent space

I'm trying to develop a customisable VAE such that I can just give it a list of the hidden layers I want and the number of neurons in each, and it builds the network. I also want to be able to see what the output of the bottleneck would be.

Here's what I have so far for a vanilla autoencoder:

import torch
import torch.nn as nn
import torch.nn.functional as F

class MultipleFC(nn.Module):
    def __init__(self, data_shape, N):
        super(MultipleFC, self).__init__()
        self.N = N #shape of the Autoencoder excluding initial and final layers
        self.N.append(data_shape)
        self.N.insert(0, data_shape)
        #print(self.N)

        self.layers = nn.ModuleList([nn.Linear(N[n], N[n+1]) for n in range(len(N)-1)])

    def forward(self, x):
        y = torch.empty_like(x)
        for i, fc in enumerate(self.layers):
            print(type(i),type(fc))
            y[..., i, :] = fc(x[..., i, :])
        return y


    def encode(self, x):
        h1 = F.relu(self.fc1(x))
        h2 = F.relu(self.fc2(h1))
        return self.fc31(h2), self.fc32(h2)

Then, in order to create the model, I would do something like:

model = MultipleFC(100, [50,10,2,10,50])

I have no idea how to get the output of the bottleneck from this.

First, what's the point of self.N? You don't use it anywhere.
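
For example, the layer sizes can live in a local variable while the ModuleList is built (a minimal sketch; sizes is just an illustrative name):

def __init__(self, data_shape, N):
    super().__init__()
    # Build the full list of layer sizes locally; it never needs to be
    # stored on self if it is not read again after construction.
    sizes = [data_shape] + list(N) + [data_shape]
    self.layers = nn.ModuleList(
        [nn.Linear(sizes[i], sizes[i + 1]) for i in range(len(sizes) - 1)]
    )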

Second, I didn't look too closely at your forward function, so I don't know if it works, but it's definitely unnecessarily complicated: you can just pass x through each layer in turn and return it at the end (create a copy of x first if you want); there's no need for y. See the sketch below.
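
A minimal sketch of that simpler forward (the ReLU between layers is my assumption, not something in the original code):

def forward(self, x):
    # Pass x sequentially through every layer and return the result.
    for i, fc in enumerate(self.layers):
        x = fc(x)
        if i < len(self.layers) - 1:   # assumed activation between layers
            x = F.relu(x)
    return x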

Third, encoding a vector just means passing it through the network up to the middle layer. Make the encode function the same as the forward function, but instead of going over all the layers, stop at the middle one (the smallest one); a sketch follows.
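
A minimal sketch of such an encode, assuming the layer sizes are symmetric (e.g. [50, 10, 2, 10, 50]) so that the encoder is exactly the first half of self.layers:

def encode(self, x):
    # Run x only through the encoder half, stopping at the bottleneck
    # (the smallest layer); its output is the latent representation.
    middle = len(self.layers) // 2
    for i, fc in enumerate(self.layers[:middle]):
        x = fc(x)
        if i < middle - 1:             # assumed activation between layers
            x = F.relu(x)
    return x

Calling model.encode(batch) then gives you the bottleneck output, which you can detach and plot to inspect the latent space.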
