
How to merge images as transparent layers?

I am working on a video editor for the Raspberry Pi, and I have a problem with the speed of placing one image over another. Currently, using ImageMagick, it takes up to 10 seconds just to composite one 1080x1920 PNG over another on the Raspberry Pi, and that's way too much. The time grows with the number of images as well. Any ideas on how to speed it up? ImageMagick code:

composite -blend 90 img1.png img2.png new.png
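For comparison, Pillow can do the same blend in a single `Image.blend` call, which runs in C and avoids spawning a process per frame. A minimal sketch, with two synthetic frames standing in for img1.png / img2.png (whether `alpha=0.9` matches `-blend 90`'s weighting exactly is an assumption worth verifying):

```python
from PIL import Image

# Two synthetic 1080x1920 frames standing in for img1.png / img2.png
img1 = Image.new('RGB', (1080, 1920), (255, 0, 0))
img2 = Image.new('RGB', (1080, 1920), (0, 0, 255))

# Image.blend computes img1*(1-alpha) + img2*alpha in C, no subprocess needed
blended = Image.blend(img1, img2, alpha=0.9)
blended.save('new.png')
```

For real files, replace the `Image.new` calls with `Image.open('img1.png').convert('RGB')` and so on.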

My video editor, which still has slow opacity support, is here.

--------EDIT--------

A slightly faster way:

import numpy as np
from PIL import Image

transparency = 0.9  # weight given to image1 in the blend
size_X, size_Y = 1920, 1080  # put the images' resolution here, else output may look weird
# note: np.resize repeats/truncates raw data rather than resampling
image1 = np.resize(np.asarray(Image.open('img1.png').convert('RGB')), (size_X, size_Y, 3))
image2 = np.resize(np.asarray(Image.open('img2.png').convert('RGB')), (size_X, size_Y, 3))
output = image1*transparency + image2*(1-transparency)
Image.fromarray(np.uint8(output)).save('output.png')
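One caveat with the snippet above: `np.resize` tiles or truncates the raw pixel data rather than resampling, so if a file's resolution differs from the target the output is scrambled. A sketch of the safer route via PIL's `Image.resize`, with synthetic images standing in for the real files (note PIL sizes are (width, height) while the resulting arrays are (height, width, 3)):

```python
import numpy as np
from PIL import Image

size_X, size_Y = 1920, 1080
transparency = 0.9  # weight given to image1

# synthetic stand-ins for img1.png / img2.png, deliberately at other resolutions
img1 = Image.new('RGB', (800, 600), (200, 40, 40))
img2 = Image.new('RGB', (640, 480), (40, 40, 200))

# Image.resize resamples properly; np.resize would just repeat/cut raw bytes.
# PIL sizes are (width, height); the resulting arrays are (height, width, 3).
a = np.asarray(img1.resize((size_X, size_Y)), dtype=np.float64)
b = np.asarray(img2.resize((size_X, size_Y)), dtype=np.float64)

output = a * transparency + b * (1 - transparency)
out_img = Image.fromarray(np.uint8(output))
```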

My Raspberry Pi is unavailable at the moment - all I am saying is that there was some smoke involved and I do software, not hardware! As a result, I have only tested this on a Mac. It uses Numba.

First I used your Numpy code on these 2 images:

(first test image)

and

(second test image)

Then I implemented the same thing using Numba. The Numba version runs 5.5x faster on my iMac. As the Raspberry Pi has 4 cores, you could try experimenting with:

@jit(nopython=True,parallel=True)
def method2(image1,image2,transparency):
   ...

Here is the code:

#!/usr/bin/env python3

import numpy as np
from PIL import Image

import numba
from numba import jit

def method1(image1,image2,transparency):
   result = image1*transparency+image2*(1-transparency)
   return result

@jit(nopython=True)
def method2(image1,image2,transparency):
   # blends in place, overwriting image1's uint8 buffer pixel by pixel
   h, w, c = image1.shape
   for y in range(h):
      for x in range(w):
         for z in range(c):
            image1[y][x][z] = image1[y][x][z] * transparency + (image2[y][x][z]*(1-transparency))
   return image1

i1 = np.array(Image.open('image1.jpg').convert('RGB'))
i2 = np.array(Image.open('image2.jpg').convert('RGB'))

res = method1(i1,i2,0.4)   # numpy version
res = method2(i1,i2,0.4)   # numba version (its result replaces the previous one)

Image.fromarray(np.uint8(res)).save('result.png')

The result is:

(blended result image)

Other thoughts... I did the composite in-place, overwriting the input image1 to try to save cache space. That may help or hinder - please experiment. I may not have processed the pixels in the optimal order - please experiment.
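The in-place point is worth spelling out: method2 overwrites its first argument, so the caller's copy of image1 is gone afterwards. A small numpy sketch of the two behaviours, using float arrays for clarity (an in-place `*=` on a uint8 array would raise a casting error in plain numpy, which numba's element-by-element loop sidesteps by casting each result back to uint8):

```python
import numpy as np

def blend_copy(image1, image2, transparency):
    # out-of-place: allocates a new array, both inputs untouched
    return image1 * transparency + image2 * (1 - transparency)

def blend_inplace(image1, image2, transparency):
    # in-place: reuses image1's buffer (like method2 above),
    # so the caller's image1 is destroyed
    image1 *= transparency
    image1 += image2 * (1 - transparency)
    return image1

a = np.full((4, 4, 3), 200.0)
b = np.full((4, 4, 3), 40.0)

r1 = blend_copy(a, b, 0.5)     # a still holds 200.0 after this call
r2 = blend_inplace(a, b, 0.5)  # a itself now holds the blended 120.0
```

The in-place version saves one full-frame allocation per blend, which matters on a memory-constrained Raspberry Pi, at the cost of destroying the input.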

Just as another option, I tried pyvips (full disclosure: I'm the pyvips maintainer, so I'm not very neutral):

#!/usr/bin/python3

import sys
import time
import pyvips

start = time.time()

a = pyvips.Image.new_from_file(sys.argv[1], access="sequential")
b = pyvips.Image.new_from_file(sys.argv[2], access="sequential")
out = a * 0.2 + b * 0.8
out.write_to_file(sys.argv[3])

print("pyvips took {} milliseconds".format(1000 * (time.time() - start)))

pyvips is a "pipeline" image processing library, so that code will execute the load, processing and save all in parallel.

On this two-core, four-thread i5 laptop, using Mark's two test images, I see:

$ ./overlay-vips.py blobs.jpg ships.jpg x.jpg
took 39.156198501586914 milliseconds

So 39ms for two jpg loads, processing and one jpg save.

You can time just the blend part by copying the source images and the result to memory, like this:

a = pyvips.Image.new_from_file(sys.argv[1]).copy_memory()
b = pyvips.Image.new_from_file(sys.argv[2]).copy_memory()

start = time.time()
out = (a * 0.2 + b * 0.8).copy_memory()
print("pyvips between memory buffers took {} milliseconds"
        .format(1000 * (time.time() - start)))

I see:

$ ./overlay-vips.py blobs.jpg ships.jpg x.jpg 
pyvips between memory buffers took 15.432596206665039 milliseconds

numpy is about 60ms on this same test.

I tried a slight variant of Mark's nice numba example:

#!/usr/bin/python3

import sys
import time
import numpy as np
from PIL import Image

import numba
from numba import jit, prange

@jit(nopython=True, parallel=True)
def method2(image1, image2, transparency):
   h, w, c = image1.shape
   for y in prange(h):
      for x in range(w):
         for z in range(c):
            image1[y][x][z] = image1[y][x][z] * transparency \
                    + (image2[y][x][z] * (1 - transparency))
   return image1

# run once to force a compile
i1 = np.array(Image.open(sys.argv[1]).convert('RGB'))
i2 = np.array(Image.open(sys.argv[2]).convert('RGB'))
res = method2(i1, i2, 0.2)

# run again and time it
i1 = np.array(Image.open(sys.argv[1]).convert('RGB'))
i2 = np.array(Image.open(sys.argv[2]).convert('RGB'))

start = time.time()
res = method2(i1, i2, 0.2)
print("numba took {} milliseconds".format(1000 * (time.time() - start)))

Image.fromarray(np.uint8(res)).save(sys.argv[3])

And I see:

$ ./overlay-numba.py blobs.jpg ships.jpg x.jpg 
numba took 8.110523223876953 milliseconds

So on this laptop, numba is about 2x faster than pyvips.

If you time the load and save as well, it's quite a bit slower:

$ ./overlay-numba.py blobs.jpg ships.jpg x.jpg 
numba plus load and save took 272.8157043457031 milliseconds

But that seems unfair, since almost all that time is in PIL load and save.
