
Translate image using PIL for Deep Learning

I am using Pillow to perform data augmentation on a dataset for a road recognition neural network. I want to translate my satellite picture (say, left to right) while wrapping the rightmost part of the image (which would otherwise fall outside the boundaries) around to the left side, as in Pac-Man, so that no information is lost.

I thought about using a PIL.Image.AFFINE transform, like so:

    import PIL.Image

    def TranslateX(img, offset):
        # Shift the image `offset` pixels along the x axis; pixels pushed
        # outside the frame are lost and the exposed strip is filled with black.
        return img.transform(img.size, PIL.Image.AFFINE, (1, 0, offset, 0, 1, 0))

This does transform the original image into this one (you can see the black border on the left side), but it does not give the result I am after, which should rather look like this.

Am I missing something? Does anyone have any idea as to how I could achieve this?

Thank you very much for your time.

Answer provided by Mark Setchell (thank you!)

It turns out I just needed to roll my image... Again, wording matters when asking a question. See the docs: https://pillow.readthedocs.io/en/stable/handbook/tutorial.html?highlight=Roll#rolling-an-image
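For reference, here is a minimal sketch of the rolling approach described in the linked tutorial, adapted to shift the image to the right: the image is split at the wrap position and the two pieces are pasted back in swapped order, so the strip that would fall off the right edge reappears on the left. The function name roll_x is just for illustration, not part of Pillow.

    from PIL import Image

    def roll_x(img, delta):
        """Translate the image `delta` pixels to the right, wrapping the
        rightmost strip around to the left side (Pac-Man style)."""
        xsize, ysize = img.size
        delta = delta % xsize
        if delta == 0:
            return img
        out = img.copy()  # avoid modifying the caller's image in place
        right = img.crop((xsize - delta, 0, xsize, ysize))  # strip that falls off the right edge
        rest = img.crop((0, 0, xsize - delta, ysize))       # everything else
        out.paste(right, (0, 0, delta, ysize))              # wrapped strip goes on the left
        out.paste(rest, (delta, 0, xsize, ysize))           # remainder shifted right
        return out

With this, roll_x(img, 100) moves everything 100 px to the right and the 100 px strip that would have been lost reappears on the left, so no pixels are discarded.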
