
OpenGL to video on iPhone

I'm currently working on a project to convert a physics simulation to a video on the iPhone itself.

To do this, I'm presently using two different loops. The first loop runs in the block where the AVAssetWriterInput object polls the EAGLView for more images; the EAGLView supplies the images from an array where they are stored.

The other loop is the actual simulation. I've turned off the simulation timer and am calling the tick myself with a pre-specified time difference each time. Every time a tick is called, I create a new image in EAGLView's swap-buffers method, after the buffers have been swapped. This image is then placed in the array that the AVAssetWriter polls.

There is also some miscellaneous code to make sure the array doesn't get too big.
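Roughly, the writer side of the first loop looks like this. This is a sketch rather than my exact code: `writerInput`, `pixelBufferAdaptor`, `dequeueNextPixelBuffer`, the `frameCount` ivar, and the 30 fps timescale all stand in for my actual setup.

-(void) startPullingFrames {
    // AVFoundation re-invokes this block whenever the input can accept more data
    dispatch_queue_t writerQueue = dispatch_queue_create("video.writer", NULL);
    [writerInput requestMediaDataWhenReadyOnQueue: writerQueue usingBlock: ^{
        while ([writerInput isReadyForMoreMediaData]) {
            // pop the oldest image off the array the simulation loop fills
            CVPixelBufferRef pixelBuffer = [self dequeueNextPixelBuffer];
            if (pixelBuffer == NULL) {
                break; // nothing queued yet; the block will be called again
            }
            [pixelBufferAdaptor appendPixelBuffer: pixelBuffer
                             withPresentationTime: CMTimeMake(frameCount++, 30)];
            CVPixelBufferRelease(pixelBuffer);
        }
    }];
}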

All of this works fine, but it is very, very slow.

Is there something I'm doing that is conceptually causing the entire process to be slower than it could be? Also, does anyone know of a faster way to get an image out of OpenGL than glReadPixels?

Video memory is designed so that it's fast to write and slow to read, which is why I render to a texture instead. Here is the entire method I've created for rendering the scene to a texture (there are some custom containers, but I think it's pretty straightforward to replace them with your own):

-(TextureInf*) makeSceneSnapshot {
    // create texture frame buffer
    GLuint textureFrameBuffer, sceneRenderTexture;

    glGenFramebuffersOES(1, &textureFrameBuffer);
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, textureFrameBuffer);

    // create texture to render scene to
    glGenTextures(1, &sceneRenderTexture);
    glBindTexture(GL_TEXTURE_2D, sceneRenderTexture);

    // create TextureInf object
    TextureInf* new_texture = new TextureInf();
    new_texture->setTextureID(sceneRenderTexture);
    new_texture->real_width = [self viewportWidth];
    new_texture->real_height = [self viewportHeight];

    //make sure the texture dimensions are power of 2
    new_texture->width = cast_to_power(new_texture->real_width, 2);
    new_texture->height = cast_to_power(new_texture->real_height, 2);

    //AABB2 = axis aligned bounding box (2D)
    AABB2 tex_box;

    tex_box.p1.x = 1 - (GLfloat)new_texture->real_width / (GLfloat)new_texture->width;
    tex_box.p1.y = 0;
    tex_box.p2.x = 1;
    tex_box.p2.y = (GLfloat)new_texture->real_height / (GLfloat)new_texture->height;
    new_texture->setTextureBox(tex_box);

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,  new_texture->width, new_texture->height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_TEXTURE_2D, sceneRenderTexture, 0);

    // check for completeness
    GLenum status = glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES);
    if(status != GL_FRAMEBUFFER_COMPLETE_OES) {
        new_texture->release();
        new_texture = nil;
        @throw [NSException exceptionWithName: EXCEPTION_NAME
                                       reason: [NSString stringWithFormat: @"failed to make complete framebuffer object %x", status]
                                     userInfo: nil];
    } else {
        // render to texture
        [self renderOneFrame];
    }

    glDeleteFramebuffersOES(1, &textureFrameBuffer);

    //restore default frame and render buffers
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, _defaultFramebuffer);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, _colorRenderbuffer);
    glEnable(GL_BLEND);         
    [self updateViewport];      
    glMatrixMode(GL_MODELVIEW);


    return new_texture;
}

Of course, if you're taking snapshots all the time, you'd do better to create the texture framebuffer and renderbuffer only once (and allocate memory for them once).
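For example, a one-time setup along these lines (just a sketch; the _snapshot*, _potWidth/_potHeight, and _defaultFramebuffer ivar names are illustrative, not taken from the method above):

-(void) setupSnapshotBuffers {
    // create the framebuffer and its target texture once, then reuse them
    glGenFramebuffersOES(1, &_snapshotFramebuffer);
    glGenTextures(1, &_snapshotTexture);

    glBindTexture(GL_TEXTURE_2D, _snapshotTexture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    // allocate the texture storage once, at power-of-two dimensions
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, _potWidth, _potHeight,
                 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    glBindFramebufferOES(GL_FRAMEBUFFER_OES, _snapshotFramebuffer);
    glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES,
                              GL_TEXTURE_2D, _snapshotTexture, 0);

    // restore the on-screen framebuffer
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, _defaultFramebuffer);
}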

One thing to remember is that the GPU runs asynchronously from the CPU, so if you call glReadPixels immediately after you finish rendering, you'll have to wait while the queued commands are flushed to the GPU and executed before you can read the results back.

Instead of waiting synchronously, render snapshots into a queue of textures (using FBOs, as Max mentioned), and wait until you've rendered a couple more frames before you dequeue one of the previous frames. I don't know whether the iPhone supports fences or sync objects, but if it does, you could check those to see whether rendering has finished before reading the pixels.
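Concretely, the lag can be implemented with a small ring of FBO-backed textures: read a slot back only when it's about to be reused, by which point the GPU has had a few frames to finish it. This is a sketch under that assumption; kLag, the _snapshot* and _width/_height ivars, and enqueuePixels: are illustrative names.

#define kLag 3  // how many frames of rendering to allow before reading back

-(void) captureFrame: (NSUInteger)frameIndex {
    NSUInteger slot = frameIndex % kLag;
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, _snapshotFBOs[slot]);

    if (frameIndex >= kLag) {
        // this slot still holds the frame rendered kLag frames ago;
        // by now the GPU should be done with it, so the readback stalls less
        glReadPixels(0, 0, _width, _height, GL_RGBA, GL_UNSIGNED_BYTE, _pixelScratch);
        [self enqueuePixels: _pixelScratch]; // hand off to the writer thread
    }

    // now overwrite the slot with the current frame
    [self renderOneFrame];

    glBindFramebufferOES(GL_FRAMEBUFFER_OES, _defaultFramebuffer);
}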

You could try using a CADisplayLink object to ensure that your drawing rate and your capture rate correspond to the device's screen-refresh rate. You may be slowing down the run loop by rendering and capturing more than once per screen refresh.

Depending on your app's goals, it might not be necessary to capture every frame you present, so in your selector you can choose whether or not to capture the current frame.
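As a sketch (the selector name, the _frameCounter ivar, and the capture-every-other-frame policy are just examples):

-(void) startDisplayLink {
    CADisplayLink* link = [CADisplayLink displayLinkWithTarget: self
                                                      selector: @selector(drawAndMaybeCapture:)];
    link.frameInterval = 1; // fire once per screen refresh
    [link addToRunLoop: [NSRunLoop mainRunLoop] forMode: NSDefaultRunLoopMode];
}

-(void) drawAndMaybeCapture: (CADisplayLink*)link {
    [self renderOneFrame];
    if (_frameCounter++ % 2 == 0) { // e.g. capture at half the refresh rate
        [self captureCurrentFrame];
    }
}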

While the question isn't new, it hasn't been answered yet, so I thought I'd pitch in.

glReadPixels is indeed very slow, and therefore can't be used to record video from an OpenGL application without adversely affecting performance.

We did find a workaround and created a free SDK called Everyplay that can record OpenGL-based graphics to a video file without performance loss. You can check it out at https://developers.everyplay.com/
