
multiprocessing in python moviepy

I am working on writing video clips out as video files in parallel with moviepy, without having to wait for each write to complete before moving on.

I therefore divide my video into 5-second clips:

     n=0
     p = 5
     clip = mp.VideoFileClip(videofile).subclip(n, n+p)
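
For context, the surrounding loop looks roughly like this (a simplified sketch; the input path and the clips list are placeholders rather than the real code):

    import moviepy.editor as mp

    videofile = 'input.mp4'   # placeholder input path
    source = mp.VideoFileClip(videofile)

    p = 5                     # chunk length in seconds
    # one 5-second subclip per chunk; the last chunk may be shorter
    clips = [source.subclip(n, min(n + p, source.duration))
             for n in range(0, int(source.duration), p)]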

I then add subtitles to the video:

    x = 0
    text2 = 'hello'+str(x)
    text[x] = TextClip(text2, font='Amiri-regular',color='white',fontsize=24).set_duration(p).set_start(0)
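
For clarity, text here and video3 below are dicts that collect the per-chunk results, roughly:

    text = {}     # one TextClip per chunk, keyed by the chunk counter x
    video3 = {}   # composited 5-second clips, keyed by the start time n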

I then repeat this for each of the first five clips, and on the fifth one I write the clip out as a video file.

I want to keep processing the rest of the video while the write continues in the background, so I use multiprocessing. After editing the code suggested by @Roland Smith, I use:

    if float.is_integer(float(x)/5.0) == True and x != 0:
        text2 = concatenate(text.values())
        textd = text2.on_color(size=(clip.w, text2.h), color=(0,0,0), col_opacity=0.6).set_pos('bottom')
        video3[n] = CompositeVideoClip([VideoFileClip(videofile).subclip(n, 5+n), textd])

    def audioclip(data):
        outname = str(data)[-10:].strip('>') + '.mp4'
        data.write_videofile(outname, fps=24, codec='libx264')
        return outname

    names = video3.values()
    h = multiprocessing.Pool()
    audiofiles = h.map(audioclip, names)
    gc.collect()
    n = n+p
    x = x+1

I had imported:

    from moviepy.editor import *
    import moviepy.editor as mp
    import os
    import gc
    import multiprocessing
    from multiprocessing import pool

However, I get this error:

    Traceback (most recent call last):
      File "p2 (copy).py", line 128, in <module>
        audiofiles = h.map(audioclip, names)
      File "/usr/lib/python2.7/multiprocessing/pool.py", line 251, in map
        return self.map_async(func, iterable, chunksize).get()
      File "/usr/lib/python2.7/multiprocessing/pool.py", line 558, in get
        raise self._value
    cPickle.PicklingError: Can't pickle <type 'function'>: attribute lookup __builtin__.function failed

Please help.

Your question isn't completely clear, but I thought I'd show you how to do the audio extraction in parallel.

The first code fragment you give could be reworked into a function.

    import moviepy.editor as mp
    import multiprocessing

    def audioclip(input):
        clip = mp.VideoFileClip(input).subclip(0, 20)
        outname = input[:-3] + 'mp3'
        clip.audio.write_audiofile(outname)
        return outname

Furthermore, you'll need a list of input filenames. Then, using the map method of the multiprocessing.Pool object, you apply the above function to all videos.

    # You should probably take the names from the command line...
    names = ['foo.mp4', 'bar.mp4', 'spam.mp4', 'eggs.mp4']

    p = multiprocessing.Pool()
    audiofiles = p.map(audioclip, names)
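
A side note: on platforms that spawn rather than fork worker processes (for example Windows), the Pool creation and the map call should sit behind an import guard so the module can be imported safely by the workers:

    if __name__ == '__main__':
        p = multiprocessing.Pool()
        audiofiles = p.map(audioclip, names)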

This will extract audio from the clips in parallel, using as many worker processes as your CPU has cores by default.

Edit: Note that if you use map, the items of the iterable have to be pickled and sent to the worker process. To keep this from using a lot of resources, it is better to, for example, send the name of a big file to the worker process (so it can read the file itself) rather than the contents of that file; otherwise this would quickly become a bottleneck. This might look wasteful if multiple workers all have to read the same file, but the filesystem caching that all modern operating systems do should mitigate that.
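
Applied to the subtitling code in the question, that means handing map only picklable data (the source path, the time range, and the caption string) and building the moviepy clips inside the worker; clip objects carry frame-generating functions internally, which is what typically triggers the PicklingError shown above. A minimal sketch along those lines, using made-up names such as write_chunk and input.mp4:

    import multiprocessing
    import moviepy.editor as mp

    def write_chunk(args):
        # args is a plain, picklable tuple: (source path, start, end, caption)
        videofile, start, end, caption = args
        clip = mp.VideoFileClip(videofile).subclip(start, end)
        txt = (mp.TextClip(caption, font='Amiri-regular', color='white', fontsize=24)
               .set_duration(end - start)
               .set_pos('bottom'))
        outname = '%s_%d_%d.mp4' % (videofile[:-4], start, end)
        mp.CompositeVideoClip([clip, txt]).write_videofile(outname, fps=24, codec='libx264')
        return outname

    if __name__ == '__main__':
        # one job per 5-second chunk of a (hypothetical) 30-second input
        jobs = [('input.mp4', n, n + 5, 'hello ' + str(n // 5)) for n in range(0, 30, 5)]
        pool = multiprocessing.Pool()
        outputs = pool.map(write_chunk, jobs)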
