
Deep learning training with non-identical images?

I am reconstructing images using dual photography. Next, I want to train a network to remove the noise and recover clean images (a denoising autoencoder).

The training inputs are the reconstructed images, while the targets are the ground truth, i.e. standard computer-based test images. The problem is that an input such as Lena is not an exact version of the Lena target: the image is shifted in position and contains some artifacts.

If I keep my reconstructed image as the input and the Lena test image (the computer standard test image) as the training target, will it work? In other words, will training still work if the input and output are shifted relative to each other, or if some details are missing from one of them (e.g. due to cropping)?

It depends on many factors, such as your training images and the architecture of the network.

However, what you want is a network that learns the noise, i.e. the low-level information, and for this purpose Generative Adversarial Networks (GANs) are very popular. If you try your approach first and the results are not satisfactory, then consider a GAN such as DCGAN (Deep Convolutional GAN).
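For illustration, here is a minimal sketch of the adversarial idea in PyTorch (the framework is my assumption; the answer names none): a generator maps a noisy image to a denoised one, and a discriminator judges whether an image looks clean. The layer sizes, image shapes, and learning rates below are placeholders, not DCGAN's exact architecture.

```python
import torch
import torch.nn as nn

# Generator: noisy image in, denoised image out (placeholder layers).
generator = nn.Sequential(
    nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 1, 3, padding=1), nn.Sigmoid(),
)

# Discriminator: image in, "looks clean" logit out (assumes 64x64 inputs).
discriminator = nn.Sequential(
    nn.Conv2d(1, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Flatten(),
    nn.Linear(64 * 32 * 32, 1),
)

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

noisy = torch.rand(8, 1, 64, 64)  # stand-in for reconstructed inputs
clean = torch.rand(8, 1, 64, 64)  # stand-in for ground-truth targets

# Discriminator step: real = clean images, fake = generator output.
fake = generator(noisy).detach()
loss_d = (bce(discriminator(clean), torch.ones(8, 1))
          + bce(discriminator(fake), torch.zeros(8, 1)))
opt_d.zero_grad()
loss_d.backward()
opt_d.step()

# Generator step: try to make the denoised output score as "clean".
loss_g = bce(discriminator(generator(noisy)), torch.ones(8, 1))
opt_g.zero_grad()
loss_g.backward()
opt_g.step()
```

In practice an adversarial loss is usually combined with a pixel-wise loss (as in pix2pix-style image-to-image setups), so the generator stays faithful to the input rather than just producing plausible-looking images.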

Also, feel free to share your outcomes with the community.

Denoising Autoencoders! Love it!

There is no reason not to train your model with those images. If it is well trained and there is enough data, the autoencoder will eventually learn the transformation.
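To make that concrete, here is a minimal denoising-autoencoder sketch in PyTorch (framework, layer sizes, and image shapes are my assumptions) that trains on (reconstructed, ground-truth) pairs exactly as the question describes:

```python
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: compress the noisy reconstruction.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: expand back to the clean target's resolution.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 3, stride=2, padding=1, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 3, stride=2, padding=1, output_padding=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = DenoisingAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One training step: noisy reconstruction in, clean ground truth out.
noisy = torch.rand(8, 1, 64, 64)  # stand-in for the reconstructed images
clean = torch.rand(8, 1, 64, 64)  # stand-in for the ground-truth images
optimizer.zero_grad()
loss = loss_fn(model(noisy), clean)
loss.backward()
optimizer.step()
```

Note that a plain pixel-wise MSE loss like this assumes the input and target are reasonably well aligned; large shifts between them will blur the output, which is why aligning or regenerating the pairs (see below) helps.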

However, if you have the clean ('positive') images, I strongly recommend creating your own noisy images and training in that controlled setting, as sketched below. You will simplify your problem and it will be easier to solve.
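For example, a small sketch of how such controlled noisy/clean pairs could be generated with NumPy; the shift range and noise level are illustrative values meant to roughly mimic the positional shifts and artifacts the question describes:

```python
import numpy as np

def make_noisy(clean, sigma=0.1, max_shift=2, rng=None):
    """Return a noisy copy of a clean image in [0, 1]."""
    rng = rng or np.random.default_rng()
    # Random small translation to mimic the positional shift.
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    shifted = np.roll(clean, (dy, dx), axis=(0, 1))
    # Additive Gaussian noise to mimic reconstruction artifacts.
    noisy = shifted + rng.normal(0.0, sigma, size=clean.shape)
    return np.clip(noisy, 0.0, 1.0)

clean = np.random.rand(64, 64)  # stand-in for a clean test image
noisy = make_noisy(clean)       # paired training input
```

Because you generated the corruption yourself, you know exactly how much shift and noise the network has to undo, and you can ramp the difficulty up gradually.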

What is stopping you from doing just that?
