
Mapping a rectangle to a quad with Pillow

I'm trying to write a Python program that takes an input image (e.g., a JPEG) and produces a "globe assembly" output image, similar to le Paper Globe. In essence, if the output image is printed, cut, folded, and glued, one should obtain the original image projected onto a rough sphere.

The program would divide the input image into 32 (8 horizontal × 4 vertical) rectangles, then map each rectangle onto some carefully chosen trapezoids or, more generally, quads. I found a Pillow/PIL method that maps a quad onto a square, but couldn't find a way to map a rectangle onto a quad.

Does anyone know how to map a rectangle of an input image onto a quad of an output image in Python? I have a preference for Pillow/PIL, but any library that can open and save JPEGs is fine.

Basically, you'd need a perspective transform to accomplish that. Pillow has Image.transform for that, but you'd need to calculate all the necessary parameters beforehand, i.e. the homographic transform; cf. this Q&A. I personally would use OpenCV's warpPerspective, and get the transformation matrix using getPerspectiveTransform, such that you only need to provide four points in the source image and four points in the destination image. This other Q&A is a good quick start on that.

Before we go into detail, I just want to be sure that the following is what you want to achieve:

[Output image]

So, the full algorithm would be:

  1. Load your source image, and the dedicated output image, which has some quad, using Pillow. I assume a black quad on a white background.
  2. Convert the images to NumPy arrays to be able to work with OpenCV.
  3. Set up the source points. These are just the corners of your region of interest (ROI).
  4. Find – or know – the destination points. These are the corners of your quad. Finding these automatically can become quite difficult, because their order must match the order of the ROI points.
  5. Get the transformation matrix, and apply the actual perspective transform.
  6. Copy the desired parts of the warped image to the quad of the initial output image.
  7. Convert back to some Pillow image and save.

And here's the full code, including some visualization:

import cv2
import numpy as np
from PIL import Image, ImageDraw

# Input image to get rectangle (region of interest, roi) from
image = Image.open('path/to/your/image.png')
roi = ((100, 30), (300, 200))

# Dummy output image with some quad to paste to
output = Image.new('RGB', (600, 800), (255, 255, 255))
draw = ImageDraw.Draw(output)
draw.polygon(((100, 20), (40, 740), (540, 350), (430, 70)), outline=(0, 0, 0))

# Convert images to NumPy arrays for processing in OpenCV
image_cv2 = np.array(image)
output_cv2 = np.array(output)

# Source points, i.e. roi in input image
tl = (roi[0][0], roi[0][1])
tr = (roi[1][0], roi[0][1])
br = (roi[1][0], roi[1][1])
bl = (roi[0][0], roi[1][1])
pts = np.array([bl, br, tr, tl])

# Find (or know) target points in output image w.r.t. the quad
# Attention: The order must be the same as defined by the roi points!
tl_dst = (100, 20)
tr_dst = (430, 70)
br_dst = (540, 350)
bl_dst = (40, 740)
dst_pts = np.array([bl_dst, br_dst, tr_dst, tl_dst])

# Get transformation matrix, and warp image
pts = pts.astype(np.float32)
dst_pts = dst_pts.astype(np.float32)
M = cv2.getPerspectiveTransform(pts, dst_pts)
image_size = (output_cv2.shape[1], output_cv2.shape[0])
warped = cv2.warpPerspective(image_cv2, M, dsize=image_size)

# Get mask from quad in output image, and copy content from warped image
gray = cv2.cvtColor(output_cv2, cv2.COLOR_RGB2GRAY)  # arrays from Pillow are RGB, not BGR
gray = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY_INV)[1]
cnts = cv2.findContours(gray, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
mask = np.zeros_like(output_cv2)
mask = cv2.drawContours(mask, cnts, 0, (255, 255, 255), cv2.FILLED)
mask = mask.all(axis=2)
output_cv2[mask, :] = warped[mask, :]

# Transform back to PIL images
output_new = Image.fromarray(output_cv2)
output_new.save('final_output.jpg')

# Just for visualization
import matplotlib.pyplot as plt
draw = ImageDraw.Draw(image)
draw.rectangle(roi, outline=(255, 0, 0), width=3)
plt.figure(0, figsize=(18, 9))
plt.subplot(1, 3, 1), plt.imshow(image), plt.title('Input with ROI')
plt.subplot(1, 3, 2), plt.imshow(output), plt.title('Output with quad')
plt.subplot(1, 3, 3), plt.imshow(output_new), plt.title('Final output')
plt.tight_layout(), plt.show()

For step #4, automatically finding the destination points, you could do something like this:

# Find target points in output image w.r.t. the quad
gray = cv2.cvtColor(output_cv2, cv2.COLOR_RGB2GRAY)  # arrays from Pillow are RGB, not BGR
gray = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY_INV)[1]
cnts = cv2.findContours(gray, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
approx = cv2.approxPolyDP(cnts[0], 0.03 * cv2.arcLength(cnts[0], True), True)

That's basically finding the contour(s) in the image, and approximating the corners. You'd still need to put the resulting points into the right order...
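For that ordering step, a common heuristic (a sketch of my own, not guaranteed for strongly rotated or degenerate quads) uses the coordinate sums and differences: the top-left corner has the smallest x + y, the bottom-right the largest, the top-right the smallest y − x, and the bottom-left the largest:

```python
import numpy as np

def order_quad_points(pts):
    """Order four corner points as (tl, tr, br, bl) using the
    sum/difference heuristic. Works for roughly upright quads."""
    pts = np.asarray(pts, dtype=np.float32).reshape(4, 2)
    s = pts.sum(axis=1)           # x + y: min at top-left, max at bottom-right
    d = np.diff(pts, axis=1)      # y - x: min at top-right, max at bottom-left
    tl = pts[np.argmin(s)]
    br = pts[np.argmax(s)]
    tr = pts[np.argmin(d)]
    bl = pts[np.argmax(d)]
    return np.array([tl, tr, br, bl])

# Example with the quad corners from the code above, in scrambled order
ordered = order_quad_points([(540, 350), (100, 20), (40, 740), (430, 70)])
```

You could apply this to the points returned by cv2.approxPolyDP, and order the ROI corners the same way, so that both point arrays line up for getPerspectiveTransform.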

----------------------------------------
System information
----------------------------------------
Platform:      Windows-10-10.0.16299-SP0
Python:        3.8.5
Matplotlib:    3.3.3
NumPy:         1.19.5
OpenCV:        4.5.1
Pillow:        8.1.0
----------------------------------------
