
How to speed up rendering with OpenGL (ES) 2 Android

I have developed a map using OpenGL ES on Android. It displays my map just fine, and I have just added touch-event handling so I can pan and fling it around, which is also working.

However, it has a lag time of about one second. I would like the panning of the image to be as smooth as possible.

I display quite a bit of vector data, but still, there must be ways to make the interaction smoother. I have 17,000 polygons (land parcels) and about 1,500 lines (road centre lines); both are pre-loaded into lists of FloatBuffers when the application launches. When I open my map activity, the renderer iterates through these lists, as you'll see in the code below.

I would really appreciate some pointers on how I can pick up speed.

(On another note, please ignore the scale detector and any rotation code; they are not working. All I am focusing on right now is panning the map.)

[screenshot of the rendered map]

package com.ANDRRA1.utilities;

import android.content.Context;
import android.opengl.GLSurfaceView;
import android.util.AttributeSet;
import android.view.MotionEvent;
import android.view.GestureDetector;
import android.view.ScaleGestureDetector;
import android.view.animation.DecelerateInterpolator;
import android.view.animation.Interpolator;

public class CustomGLView extends GLSurfaceView {

    public vboCustomGLRenderer mGLRenderer;

    public CustomGLView(Context context){
        super(context);
    }

    public CustomGLView(Context context, AttributeSet attrs) 
    {
        super(context, attrs);  
    }

    // Overloads the superclass setRenderer with our concrete renderer type.
    public void setRenderer(vboCustomGLRenderer renderer) 
    {
        mGLRenderer = renderer;
        super.setRenderer(renderer);

        super.setRenderMode(GLSurfaceView.RENDERMODE_WHEN_DIRTY);
    }

    private static final int INVALID_POINTER_ID = -1;

    private float mPosX;
    private float mPosY;

    private float mLastTouchX;
    private float mLastTouchY;
    private float mLastGestureX;
    private float mLastGestureY;
    private int mActivePointerId = INVALID_POINTER_ID;
    private int mActivePointerId2 = INVALID_POINTER_ID;
    float oL1X1, oL1Y1, oL1X2, oL1Y2;

    private ScaleGestureDetector mScaleDetector = new ScaleGestureDetector(getContext(), new ScaleListener());
    private GestureDetector mGestureDetector = new GestureDetector(getContext(), new GestureListener());

    private float mScaleFactor = 1.f;

    // The following variables control the fling gesture.
    private Interpolator animateInterpolator;
    private long startTime;
    private long endTime;
    private float totalAnimDx;
    private float totalAnimDy;
    private float lastAnimDx;
    private float lastAnimDy;

    @Override
    public boolean onTouchEvent(MotionEvent ev) {
        // Let the ScaleGestureDetector inspect all events.
        mScaleDetector.onTouchEvent(ev);
        mGestureDetector.onTouchEvent(ev);

        final int action = ev.getAction();
        switch (action & MotionEvent.ACTION_MASK) {
            case MotionEvent.ACTION_DOWN: {

                if (!mScaleDetector.isInProgress()) {
                    final float x = ev.getX();
                    final float y = ev.getY();

                    mLastTouchX = x;
                    mLastTouchY = y;
                    mActivePointerId = ev.getPointerId(0);
                }
                break;
            }
            case MotionEvent.ACTION_POINTER_DOWN: {
                if (mScaleDetector.isInProgress()) {
                    mActivePointerId2 = ev.getPointerId(1);

                    mLastGestureX = mScaleDetector.getFocusX();
                    mLastGestureY = mScaleDetector.getFocusY();

                    oL1X1 = ev.getX(ev.findPointerIndex(mActivePointerId));
                    oL1Y1 = ev.getY(ev.findPointerIndex(mActivePointerId));
                    oL1X2 = ev.getX(ev.findPointerIndex(mActivePointerId2));
                    oL1Y2 = ev.getY(ev.findPointerIndex(mActivePointerId2));
                }
                break;
            }

            case MotionEvent.ACTION_MOVE: {

                // Only move if the ScaleGestureDetector isn't processing a gesture.
                if (!mScaleDetector.isInProgress()) {
                    final int pointerIndex = ev.findPointerIndex(mActivePointerId);
                    final float x = ev.getX(pointerIndex);
                    final float y = ev.getY(pointerIndex);

                    final float dx = x - mLastTouchX;
                    final float dy = y - mLastTouchY;

                    mPosX += dx;
                    mPosY += dy;

                    mGLRenderer.setEye(dx, dy);
                    requestRender();

                    mLastTouchX = x;
                    mLastTouchY = y;
                }
                else{
                    final float gx = mScaleDetector.getFocusX();
                    final float gy = mScaleDetector.getFocusY();

                    final float gdx = gx - mLastGestureX;
                    final float gdy = gy - mLastGestureY;

                    mPosX += gdx;
                    mPosY += gdy;

                    mLastGestureX = gx;
                    mLastGestureY = gy;
                }

                break;
            }

            case MotionEvent.ACTION_UP: {
                mActivePointerId = INVALID_POINTER_ID;

                break;
            }
            case MotionEvent.ACTION_CANCEL: {
                mActivePointerId = INVALID_POINTER_ID;
                break;
            }
            case MotionEvent.ACTION_POINTER_UP: {

                final int pointerIndex = (ev.getAction() & MotionEvent.ACTION_POINTER_INDEX_MASK) 
                        >> MotionEvent.ACTION_POINTER_INDEX_SHIFT;
                final int pointerId = ev.getPointerId(pointerIndex);
                if (pointerId == mActivePointerId) {
                    // This was our active pointer going up. Choose a new
                    // active pointer and adjust accordingly.
                    final int newPointerIndex = pointerIndex == 0 ? 1 : 0;
                    mLastTouchX = ev.getX(newPointerIndex);
                    mLastTouchY = ev.getY(newPointerIndex);
                    mActivePointerId = ev.getPointerId(newPointerIndex);
                }
                else{
                    final int tempPointerIndex = ev.findPointerIndex(mActivePointerId);
                    mLastTouchX = ev.getX(tempPointerIndex);
                    mLastTouchY = ev.getY(tempPointerIndex);
                }

                break;
            }
        }

        return true;
    }

    private class ScaleListener extends ScaleGestureDetector.SimpleOnScaleGestureListener {
        @Override
        public boolean onScale(ScaleGestureDetector detector) {
            mScaleFactor *= detector.getScaleFactor();

            // Don't let the object get too small or too large.
            mScaleFactor = Math.max(0.1f, Math.min(mScaleFactor, 10000.0f));

            //invalidate();
            return true;
        }
    }

    private class GestureListener extends GestureDetector.SimpleOnGestureListener {
        @Override
        public boolean onFling(MotionEvent e1, MotionEvent e2, float velocityX, float velocityY) {

            if (e1 == null || e2 == null){
                return false;
            }
            final float distanceTimeFactor = 0.4f;
            final float totalDx = (distanceTimeFactor * velocityX/2);
            final float totalDy = (distanceTimeFactor * velocityY/2);

            onAnimateMove(totalDx, totalDy, (long) (1000 * distanceTimeFactor));
            return true;
        }
    }

    public void onAnimateMove(float dx, float dy, long duration) {
        animateInterpolator = new DecelerateInterpolator();
        startTime = System.currentTimeMillis();
        endTime = startTime + duration;
        totalAnimDx = dx;
        totalAnimDy = dy;
        lastAnimDx = 0;
        lastAnimDy = 0;

        post(new Runnable() {
            @Override
            public void run() {
                onAnimateStep();
            }
        });
    }

    private void onAnimateStep() {
        long curTime = System.currentTimeMillis();
        float percentTime = (float) (curTime - startTime) / (float) (endTime - startTime);
        float percentDistance = animateInterpolator.getInterpolation(percentTime);
        float curDx = percentDistance * totalAnimDx;
        float curDy = percentDistance * totalAnimDy;

        float diffCurDx = curDx - lastAnimDx;
        float diffCurDy = curDy - lastAnimDy;
        lastAnimDx = curDx;
        lastAnimDy = curDy;

        doAnimation(diffCurDx, diffCurDy);

        if (percentTime < 1.0f) {
            post(new Runnable() {
                @Override
                public void run() {
                    onAnimateStep();
                }
            });
        }
    }

    public void doAnimation(float diffDx, float diffDy) {
        mPosX += diffDx;
        mPosY += diffDy;

        mGLRenderer.setEye(diffDx, diffDy);
        requestRender();
    }

    public float angleBetween2Lines(float L1X1, float L1Y1, float L1X2, float L1Y2, float L2X1, float L2Y1, float L2X2, float L2Y2)
    {
        float angle1 = (float) Math.atan2(L1Y1 - L1Y2, L1X1 - L1X2);
        float angle2 = (float) Math.atan2(L2Y1 - L2Y2, L2X1 - L2X2);

        float angleDelta = findAngleDelta( (float)Math.toDegrees(angle1), (float)Math.toDegrees(angle2));
        return -angleDelta;
    }

    private float findAngleDelta( float angle1, float angle2 )
    {
        return angle1 - angle2;
    }
}

And the renderer:

package com.ANDRRA1.utilities;

import java.nio.FloatBuffer;
import java.util.ListIterator;

import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

import android.opengl.GLES20;
import android.opengl.GLSurfaceView;
import android.opengl.Matrix;

public class vboCustomGLRenderer  implements GLSurfaceView.Renderer {

    /**
     * Store the model matrix. This matrix is used to move models from object space (where each model can be thought
     * of being located at the center of the universe) to world space.
     */
    private float[] mModelMatrix = new float[16];

    /**
     * Store the view matrix. This can be thought of as our camera. This matrix transforms world space to eye space;
     * it positions things relative to our eye.
     */
    private float[] mViewMatrix = new float[16];

    /** Store the projection matrix. This is used to project the scene onto a 2D viewport. */
    private float[] mProjectionMatrix = new float[16];

    /** Allocate storage for the final combined matrix. This will be passed into the shader program. */
    private float[] mMVPMatrix = new float[16];

    /** This will be used to pass in the transformation matrix. */
    private int mMVPMatrixHandle;

    /** This will be used to pass in model position information. */
    private int mPositionHandle;

    /** This will be used to pass in model color information. */
    private int mColorUniformLocation;

    /** How many bytes per float. */
    private final int mBytesPerFloat = 4;   

    /** Offset of the position data. */
    private final int mPositionOffset = 0;

    /** Size of the position data in elements. */
    private final int mPositionDataSize = 3;

    /** Stride of the position data, in bytes. */
    private final int mPositionFloatStrideBytes = mPositionDataSize * mBytesPerFloat;

    // geometry types
    private final byte wkbPoint = 1;
    private final byte wkbLineString = 2;
    private final byte wkbPolygon = 3;
    //private final byte wkbMultiPoint = 4;
    //private final byte wkbMultiLineString = 5;
    //private final byte wkbMultiPolygon = 6;
    //private final byte wkbGeometryCollection = 7;

    // Big Endian
    final int wkbXDR = 0;
    // Little Endian
    final int wkbNDR = 1;


    float count = 0;

    // Position the eye behind the origin.
    public volatile float eyeX = default_settings.mbrMinX + ((default_settings.mbrMaxX - default_settings.mbrMinX)/2);
    public volatile float eyeY = default_settings.mbrMinY + ((default_settings.mbrMaxY - default_settings.mbrMinY)/2);

    // Position the eye behind the origin.
    //final float eyeZ = 1.5f;
    public volatile float eyeZ = 1.5f;

    // We are looking toward the distance
    public volatile float lookX = eyeX;
    public volatile float lookY = eyeY;
    public volatile float lookZ = 0.0f;

    // Set our up vector. This is where our head would be pointing were we holding the camera.
    public volatile float upX = 0.0f;
    public volatile float upY = 1.0f;
    public volatile float upZ = 0.0f;


    public vboCustomGLRenderer() {
    }

    public void setEye(float x, float y){

        eyeX -= (x/screen_vs_map_horz_ratio);
        lookX = eyeX;
        eyeY += (y/screen_vs_map_vert_ratio);
        lookY = eyeY;

        // Set the camera position (View matrix)
        Matrix.setLookAtM(mViewMatrix, 0, eyeX, eyeY, eyeZ, lookX, lookY, lookZ, upX, upY, upZ);
    }

    @Override
    public void onSurfaceCreated(GL10 unused, EGLConfig config) {


        Thread.currentThread().setPriority(Thread.MIN_PRIORITY);

        // Set the background frame color
        //White
        GLES20.glClearColor(1.0f, 1.0f, 1.0f, 1.0f);

        // Set the view matrix. This matrix can be said to represent the camera position.
        // NOTE: In OpenGL 1, a ModelView matrix is used, which is a combination of a model and
        // view matrix. In OpenGL 2, we can keep track of these matrices separately if we choose.
        Matrix.setLookAtM(mViewMatrix, 0, eyeX, eyeY, eyeZ, lookX, lookY, lookZ, upX, upY, upZ);

        final String vertexShader =
            "uniform mat4 u_MVPMatrix;      \n"     // A constant representing the combined model/view/projection matrix.

          + "attribute vec4 a_Position;     \n"     // Per-vertex position information we will pass in.
          + "attribute vec4 a_Color;        \n"     // Per-vertex color information we will pass in.              

          + "varying vec4 v_Color;          \n"     // This will be passed into the fragment shader.

          + "void main()                    \n"     // The entry point for our vertex shader.
          + "{                              \n"
          + "   v_Color = a_Color;          \n"     // Pass the color through to the fragment shader. 
                                                    // It will be interpolated across the triangle.
          + "   gl_Position = u_MVPMatrix   \n"     // gl_Position is a special variable used to store the final position.
          + "               * a_Position;   \n"     // Multiply the vertex by the matrix to get the final point in                                                                   
          + "}                              \n";    // normalized screen coordinates.

        final String fragmentShader =
                "precision mediump float;       \n"     // Set the default precision to medium. We don't need as high of a 
                                                        // precision in the fragment shader.                
              + "uniform vec4 u_Color;          \n"     // This is the color from the vertex shader interpolated across the 
                                                        // triangle per fragment.             
              + "void main()                    \n"     // The entry point for our fragment shader.
              + "{                              \n"
              + "   gl_FragColor = u_Color;     \n"     // Pass the color directly through the pipeline.          
              + "}                              \n";                                                

        // Load in the vertex shader.
        int vertexShaderHandle = GLES20.glCreateShader(GLES20.GL_VERTEX_SHADER);

        if (vertexShaderHandle != 0) 
        {
            // Pass in the shader source.
            GLES20.glShaderSource(vertexShaderHandle, vertexShader);

            // Compile the shader.
            GLES20.glCompileShader(vertexShaderHandle);

            // Get the compilation status.
            final int[] compileStatus = new int[1];
            GLES20.glGetShaderiv(vertexShaderHandle, GLES20.GL_COMPILE_STATUS, compileStatus, 0);

            // If the compilation failed, delete the shader.
            if (compileStatus[0] == 0) 
            {               
                GLES20.glDeleteShader(vertexShaderHandle);
                vertexShaderHandle = 0;
            }
        }

        if (vertexShaderHandle == 0)
        {
            throw new RuntimeException("Error creating vertex shader.");
        }

        // Load in the fragment shader.
        int fragmentShaderHandle = GLES20.glCreateShader(GLES20.GL_FRAGMENT_SHADER);

        if (fragmentShaderHandle != 0) 
        {
            // Pass in the shader source.
            GLES20.glShaderSource(fragmentShaderHandle, fragmentShader);

            // Compile the shader.
            GLES20.glCompileShader(fragmentShaderHandle);

            // Get the compilation status.
            final int[] compileStatus = new int[1];
            GLES20.glGetShaderiv(fragmentShaderHandle, GLES20.GL_COMPILE_STATUS, compileStatus, 0);

            // If the compilation failed, delete the shader.
            if (compileStatus[0] == 0) 
            {               
                GLES20.glDeleteShader(fragmentShaderHandle);
                fragmentShaderHandle = 0;
            }
        }

        if (fragmentShaderHandle == 0)
        {
            throw new RuntimeException("Error creating fragment shader.");
        }

        // Create a program object and store the handle to it.
        int programHandle = GLES20.glCreateProgram();

        if (programHandle != 0) 
        {
            // Bind the vertex shader to the program.
            GLES20.glAttachShader(programHandle, vertexShaderHandle);           

            // Bind the fragment shader to the program.
            GLES20.glAttachShader(programHandle, fragmentShaderHandle);

            // Bind attributes
            GLES20.glBindAttribLocation(programHandle, 0, "a_Position");
            GLES20.glBindAttribLocation(programHandle, 1, "a_Color");

            // Link the two shaders together into a program.
            GLES20.glLinkProgram(programHandle);

            // Get the link status.
            final int[] linkStatus = new int[1];
            GLES20.glGetProgramiv(programHandle, GLES20.GL_LINK_STATUS, linkStatus, 0);

            // If the link failed, delete the program.
            if (linkStatus[0] == 0) 
            {               
                GLES20.glDeleteProgram(programHandle);
                programHandle = 0;
            }
        }

        if (programHandle == 0)
        {
            throw new RuntimeException("Error creating program.");
        }

        // Set program handles. These will later be used to pass in values to the program.
        mMVPMatrixHandle = GLES20.glGetUniformLocation(programHandle, "u_MVPMatrix");
        mPositionHandle = GLES20.glGetAttribLocation(programHandle, "a_Position");
        mColorUniformLocation = GLES20.glGetUniformLocation(programHandle, "u_Color");

        // Tell OpenGL to use this program when rendering.
        GLES20.glUseProgram(programHandle);

    }

    static float mWidth = 0;
    static float mHeight = 0;
    static float mLeft = 0;
    static float mRight = 0;
    static float mTop = 0;
    static float mBottom = 0;
    static float mRatio = 0;
    float screen_width_height_ratio;
    float screen_height_width_ratio;
    final float near = 1.5f;
    final float far = 10.0f;

    double screen_vs_map_horz_ratio = 0;
    double screen_vs_map_vert_ratio = 0;

    @Override
    public void onSurfaceChanged(GL10 unused, int width, int height) {

        // Adjust the viewport based on geometry changes,
        // such as screen rotation
        // Set the OpenGL viewport to the same size as the surface.
        GLES20.glViewport(0, 0, width, height);
        //Log.d("","onSurfaceChanged");

        screen_width_height_ratio = (float) width / height;
        screen_height_width_ratio = (float) height / width;

        //Initialize
        if (mRatio == 0){
            mWidth = (float) width;
            mHeight = (float) height;

            //map height to width ratio
            float map_extents_width = default_settings.mbrMaxX - default_settings.mbrMinX;
            float map_extents_height = default_settings.mbrMaxY - default_settings.mbrMinY;
            float map_width_height_ratio = map_extents_width/map_extents_height;
            //float map_height_width_ratio = map_extents_height/map_extents_width;
            if (screen_width_height_ratio > map_width_height_ratio){
                mRight = (screen_width_height_ratio * map_extents_height)/2;
                mLeft = -mRight;
                mTop = map_extents_height/2;
                mBottom = -mTop;
            }
            else{
                mRight = map_extents_width/2;
                mLeft = -mRight;
                mTop = (screen_height_width_ratio * map_extents_width)/2;
                mBottom = -mTop;
            }

            mRatio = screen_width_height_ratio;
        }

        if (screen_width_height_ratio != mRatio){
            final float wRatio = width/mWidth;
            final float oldWidth = mRight - mLeft;
            final float newWidth = wRatio * oldWidth;
            final float widthDiff = (newWidth - oldWidth)/2;
            mLeft = mLeft - widthDiff;
            mRight = mRight + widthDiff;

            final float hRatio = height/mHeight;
            final float oldHeight = mTop - mBottom;
            final float newHeight = hRatio * oldHeight;
            final float heightDiff = (newHeight - oldHeight)/2;
            mBottom = mBottom - heightDiff;
            mTop = mTop + heightDiff;

            mWidth = (float) width;
            mHeight = (float) height;

            mRatio = screen_width_height_ratio;
        }

        screen_vs_map_horz_ratio = (mWidth/(mRight-mLeft));
        screen_vs_map_vert_ratio = (mHeight/(mTop-mBottom));

        Matrix.frustumM(mProjectionMatrix, 0, mLeft, mRight, mBottom, mTop, near, far);
    }

    @Override
    public void onDrawFrame(GL10 unused) {

        GLES20.glClear(GLES20.GL_DEPTH_BUFFER_BIT | GLES20.GL_COLOR_BUFFER_BIT);

        // The following lists hold the vector data in FloatBuffers, pre-loaded when the application starts.
        ListIterator<mapLayer> orgNonAssetCatLayersList_it = default_settings.orgNonAssetCatMappableLayers.listIterator();
        while (orgNonAssetCatLayersList_it.hasNext()) {
            mapLayer MapLayer = orgNonAssetCatLayersList_it.next();

            ListIterator<FloatBuffer> mapLayerObjectList_it = MapLayer.objFloatBuffer.listIterator();
            ListIterator<Byte> mapLayerObjectTypeList_it = MapLayer.objTypeArray.listIterator();
            while (mapLayerObjectTypeList_it.hasNext()) {

                switch (mapLayerObjectTypeList_it.next()) {
                    case wkbPoint:
                        break;
                    case wkbLineString:
                        Matrix.setIdentityM(mModelMatrix, 0);
                        //Matrix.rotateM(mModelMatrix, 0, 0, 0.0f, 0.0f, 1.0f);
                        drawLineString(mapLayerObjectList_it.next(), MapLayer.lineStringObjColor);
                        break;
                    case wkbPolygon:
                        Matrix.setIdentityM(mModelMatrix, 0);
                        //Matrix.rotateM(mModelMatrix, 0, 0, 0.0f, 0.0f, 1.0f);
                        drawPolygon(mapLayerObjectList_it.next(), MapLayer.polygonObjColor);
                        break;
                }
            }
        }
    }

    private void drawLineString(final FloatBuffer geometryBuffer, final float[] colorArray)
    {
        // Pass in the position information
        geometryBuffer.position(mPositionOffset);
        GLES20.glVertexAttribPointer(mPositionHandle, mPositionDataSize, GLES20.GL_FLOAT, false, mPositionFloatStrideBytes, geometryBuffer);

        GLES20.glEnableVertexAttribArray(mPositionHandle);

        GLES20.glUniform4f(mColorUniformLocation, colorArray[0], colorArray[1], colorArray[2], 1f);

        // This multiplies the view matrix by the model matrix, and stores the result in the MVP matrix
        // (which currently contains model * view).
        Matrix.multiplyMM(mMVPMatrix, 0, mViewMatrix, 0, mModelMatrix, 0);

        // This multiplies the modelview matrix by the projection matrix, and stores the result in the MVP matrix
        // (which now contains model * view * projection).
        Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mMVPMatrix, 0);

        GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mMVPMatrix, 0);

        GLES20.glLineWidth(2.0f);
        GLES20.glDrawArrays(GLES20.GL_LINE_STRIP, 0, geometryBuffer.capacity()/mPositionDataSize);
    }

    private void drawPolygon(final FloatBuffer geometryBuffer, final float[] colorArray)
    {
        // Pass in the position information
        geometryBuffer.position(mPositionOffset);
        GLES20.glVertexAttribPointer(mPositionHandle, mPositionDataSize, GLES20.GL_FLOAT, false, mPositionFloatStrideBytes, geometryBuffer);

        GLES20.glEnableVertexAttribArray(mPositionHandle);

        GLES20.glUniform4f(mColorUniformLocation, colorArray[0], colorArray[1], colorArray[2], 1f);

        // This multiplies the view matrix by the model matrix, and stores the result in the MVP matrix
        // (which currently contains model * view).
        Matrix.multiplyMM(mMVPMatrix, 0, mViewMatrix, 0, mModelMatrix, 0);

        // This multiplies the modelview matrix by the projection matrix, and stores the result in the MVP matrix
        // (which now contains model * view * projection).
        Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mMVPMatrix, 0);

        GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mMVPMatrix, 0);

        GLES20.glLineWidth(1.0f);
        GLES20.glDrawArrays(GLES20.GL_LINE_LOOP, 0, geometryBuffer.capacity()/mPositionDataSize);
    }
}

The other answers here are already very good and show places to look and things to improve. I suspect the slowdown comes from calling drawPolygon and drawLineString thousands of times per frame (if you have thousands of polygons and lines), with each call invoking several OpenGL methods. You really want to batch these calls so that you draw all the polygons and all the lines in single, separate draw calls.

It's hard to time this stuff accurately: OpenGL buffers its calls, and in my experience even the Android tracer gives inaccurate results. What you can do is remove or change code between runs, time the entire draw loop, and see how things change.

Try removing Thread.currentThread().setPriority(Thread.MIN_PRIORITY);, and re-engineer the app to put your data into a vertex buffer object bound with GL_STATIC_DRAW. Draw all of the lines with a single draw call. To avoid state changes breaking up draw calls, you can pass the color in as a vertex attribute instead of a uniform. You can also calculate and pass in the matrix uniform once per overall draw instead of once per line.
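
As a minimal sketch of that idea (not the poster's code: the class name, the interleaved position+color layout, and the handles are assumptions), all line vertices are uploaded once into a VBO at load time and drawn with one call. Note that batching independent line strips into a single GL_LINES call means duplicating the interior vertices of each strip:

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

import android.opengl.GLES20;

public class BatchedLines {

    private static final int POSITION_SIZE = 3;  // x, y, z per vertex
    private static final int COLOR_SIZE = 4;     // r, g, b, a per vertex
    private static final int STRIDE_BYTES = (POSITION_SIZE + COLOR_SIZE) * 4;

    private int vboHandle;
    private int vertexCount;

    /** Upload every line segment once, at load time, not per frame. */
    public void upload(float[] interleaved) {
        vertexCount = interleaved.length / (POSITION_SIZE + COLOR_SIZE);

        FloatBuffer buffer = ByteBuffer
                .allocateDirect(interleaved.length * 4)
                .order(ByteOrder.nativeOrder())
                .asFloatBuffer();
        buffer.put(interleaved).position(0);

        int[] handles = new int[1];
        GLES20.glGenBuffers(1, handles, 0);
        vboHandle = handles[0];

        GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vboHandle);
        GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, interleaved.length * 4,
                buffer, GLES20.GL_STATIC_DRAW);
        GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);
    }

    /** One draw call for every line in the scene. */
    public void draw(int positionHandle, int colorHandle) {
        GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vboHandle);

        GLES20.glEnableVertexAttribArray(positionHandle);
        GLES20.glVertexAttribPointer(positionHandle, POSITION_SIZE,
                GLES20.GL_FLOAT, false, STRIDE_BYTES, 0);

        GLES20.glEnableVertexAttribArray(colorHandle);
        GLES20.glVertexAttribPointer(colorHandle, COLOR_SIZE,
                GLES20.GL_FLOAT, false, STRIDE_BYTES, POSITION_SIZE * 4);

        GLES20.glDrawArrays(GLES20.GL_LINES, 0, vertexCount);
        GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);
    }
}

With this layout the shader's existing a_Color attribute replaces the u_Color uniform, so no state change is needed between lines of different colors.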

I suggest you focus on limiting the amount you draw based on zoom level and viewport, rather than trying to optimize rendering efficiency. Chances are you're not making too many inefficient rendering calls if you're getting decent response time with up to 17,000 polygons.

Look at the picture you posted. If you're rendering 17,000 polygons and 1,500 lines there, most of the detail is wasted, since we can't see that level of detail, right? I certainly don't see 17,000 polygons.

Instead, keep the full detail loaded, and write code to limit the detail based on zoom level. This approach is, unsurprisingly, called a level-of-detail (LoD) algorithm. If you've ever done much with mipmaps, it's based on the same principle.

I would calculate level-of-detail data for all the zoom levels you want, and reference this cached data based on the current zoom level. When the user isn't at one of your discrete zoom levels, just reference the closest one and scale.
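
A small sketch of that lookup, assuming the LoD sets have already been precomputed into an array (the level values and names here are hypothetical):

private static final float[] ZOOM_LEVELS = {1f, 2f, 4f, 8f, 16f};
private FloatBuffer[] lodBuffers = new FloatBuffer[ZOOM_LEVELS.length];

/** Return the precomputed geometry nearest to the current scale factor. */
private FloatBuffer selectLod(float scaleFactor) {
    int best = 0;
    float bestDiff = Math.abs(scaleFactor - ZOOM_LEVELS[0]);
    for (int i = 1; i < ZOOM_LEVELS.length; i++) {
        float diff = Math.abs(scaleFactor - ZOOM_LEVELS[i]);
        if (diff < bestDiff) {
            bestDiff = diff;
            best = i;
        }
    }
    return lodBuffers[best]; // draw this set, scaled to the exact zoom
}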

When high detail is needed at closer zoom levels, you can keep things fast by using a spatial partitioning algorithm to cull the lines and polygons in your level-of-detail data that don't need to be rendered.
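
One common form of spatial partitioning is a uniform grid. Here is a hypothetical sketch (all names are mine) where each object is bucketed by its centroid at load time, and only the cells overlapping the viewport are drawn:

import java.util.ArrayList;
import java.util.List;

class GridIndex {
    private final float minX, minY, cellSize;
    private final int cols, rows;
    private final List<List<Integer>> cells; // object indices per cell

    GridIndex(float minX, float minY, float maxX, float maxY, float cellSize) {
        this.minX = minX;
        this.minY = minY;
        this.cellSize = cellSize;
        this.cols = (int) Math.ceil((maxX - minX) / cellSize);
        this.rows = (int) Math.ceil((maxY - minY) / cellSize);
        this.cells = new ArrayList<List<Integer>>(cols * rows);
        for (int i = 0; i < cols * rows; i++) {
            cells.add(new ArrayList<Integer>());
        }
    }

    /** Bucket an object (e.g. a polygon) by its centroid. */
    void insert(int objectIndex, float x, float y) {
        cells.get(row(y) * cols + col(x)).add(objectIndex);
    }

    /** Indices of objects whose cells overlap the viewport rectangle. */
    List<Integer> query(float left, float bottom, float right, float top) {
        List<Integer> visible = new ArrayList<Integer>();
        for (int r = row(bottom); r <= row(top); r++) {
            for (int c = col(left); c <= col(right); c++) {
                visible.addAll(cells.get(r * cols + c));
            }
        }
        return visible;
    }

    private int col(float x) {
        return Math.max(0, Math.min(cols - 1, (int) ((x - minX) / cellSize)));
    }

    private int row(float y) {
        return Math.max(0, Math.min(rows - 1, (int) ((y - minY) / cellSize)));
    }
}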

If you need clarification on any point, let me know. This stuff is easy to talk about but tricky to code. Good luck!

EDIT:

One LoD implementation would be to calculate your polygon and line positions based on your matrix scaling, then discard any points which aren't sufficiently far apart. I'd just cast their floating-point positions to ints for a start and see what it looks like. Do this for several scaling levels, store the results in an array, then round whatever scaling level you're at to select the nearest cached LoD data.
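
A sketch of that decimation step under the same assumption (names are mine): project each vertex with the scale for the level being built, truncate to integer coordinates, and keep a point only when it lands on a different cell than the previously kept point:

import java.util.ArrayList;
import java.util.List;

/** Drop points that collapse onto the same integer cell at this scale. */
static float[] decimate(float[] xy, float scale) {
    List<Float> kept = new ArrayList<Float>();
    int lastX = Integer.MIN_VALUE;
    int lastY = Integer.MIN_VALUE;
    for (int i = 0; i + 1 < xy.length; i += 2) {
        int ix = (int) (xy[i] * scale);
        int iy = (int) (xy[i + 1] * scale);
        if (ix != lastX || iy != lastY) { // far enough apart at this zoom
            kept.add(xy[i]);
            kept.add(xy[i + 1]);
            lastX = ix;
            lastY = iy;
        }
    }
    float[] out = new float[kept.size()];
    for (int i = 0; i < out.length; i++) {
        out[i] = kept.get(i);
    }
    return out;
}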

A few things leap out at me.

1) Don't create objects in your drawing routines such as onDrawFrame. Iterators such as

    ListIterator<mapLayer> orgNonAssetCatLayersList_it = default_settings.orgNonAssetCatMappableLayers.listIterator();

create objects, and creating objects in your drawing routine hurts performance.
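
For example, the loop in onDrawFrame could use plain index access instead, which allocates nothing per frame (this assumes the layer lists are ArrayLists, or anything else with a cheap get(int); the separate geometry index mirrors the original iterator, which only advances for lines and polygons):

int layerCount = default_settings.orgNonAssetCatMappableLayers.size();
for (int i = 0; i < layerCount; i++) {
    mapLayer layer = default_settings.orgNonAssetCatMappableLayers.get(i);
    int objCount = layer.objTypeArray.size();
    int geomIndex = 0;
    for (int j = 0; j < objCount; j++) {
        switch (layer.objTypeArray.get(j)) {
            case wkbLineString:
                Matrix.setIdentityM(mModelMatrix, 0);
                drawLineString(layer.objFloatBuffer.get(geomIndex++),
                        layer.lineStringObjColor);
                break;
            case wkbPolygon:
                Matrix.setIdentityM(mModelMatrix, 0);
                drawPolygon(layer.objFloatBuffer.get(geomIndex++),
                        layer.polygonObjColor);
                break;
        }
    }
}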

2) Minimize OpenGL calls as much as possible. Java still has to cross the JNI boundary on every OpenGL call, so put everything into a few large byte buffers if you can, and avoid changing OpenGL state. I would try to organize the data into as few buffers as possible: one set that draws the lines and another that draws the polygons.

You may also want to consider rendering only part of the data at each zoom level. Others may have better ideas, and if you look around SO or online I'm sure you'll find them.

3) Always measure your performance to see where your real problems are. Android has a variety of tools available (Traceview, Systrace, OpenGL ES Tracer).

For more general Android performance tips, see: http://developer.android.com/training/articles/perf-tips.html

No one is mentioning FBOs? I mean, LoD would be a good approach, but in some cases you can use FBOs. It depends on many things, but you should consider them!

You can render your whole scene (or part of it) to a framebuffer object and show that image instead of drawing the scene each frame. That reduces the polygon count to just a few (2 triangles in the best case, i.e. one quad).
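
A minimal sketch of setting up such a texture-backed FBO with GLES20 (width, height, and the variable names are assumptions; error handling is trimmed to the completeness check):

int[] fbo = new int[1];
int[] tex = new int[1];

// Create the texture the scene will be rendered into.
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width, height,
        0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

// Attach the texture to a framebuffer object.
GLES20.glGenFramebuffers(1, fbo, 0);
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0]);
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
        GLES20.GL_TEXTURE_2D, tex[0], 0);

if (GLES20.glCheckFramebufferStatus(GLES20.GL_FRAMEBUFFER)
        != GLES20.GL_FRAMEBUFFER_COMPLETE) {
    throw new RuntimeException("Framebuffer is not complete");
}

// Draw the full vector scene into the FBO once, then switch back and,
// on every frame, draw a single textured quad sampling tex[0].
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);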

A new problem would be how to handle panning and zooming, since you'd need to recompute the FBOs dynamically. You can draw a blurry image until the FBO is ready, or take a more advanced approach with some pre-loading, like dividing the map into squares and preloading 9 of them (the center and its 8 neighbors), the way Google Maps loads its map data.

You'll also need to keep an eye on memory consumption; you can't just draw every combination to FBOs.

I repeat, FBOs are not a stand-alone solution, but keep them in mind and see if you can use them somewhere!

Are you running this in the emulator or on a real handset?

Can you add some code to check the render time of your functions?

Try to find out what is taking the time that way.
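
For example, a rough timing sketch around the draw loop (remember GL executes asynchronously, so call glFinish() before stopping the clock if you want the GPU work included; SystemClock and Log are the standard Android APIs):

@Override
public void onDrawFrame(GL10 unused) {
    long start = android.os.SystemClock.elapsedRealtime();

    // ... existing drawing code ...

    GLES20.glFinish(); // force pending GL work to complete before timing
    long elapsedMs = android.os.SystemClock.elapsedRealtime() - start;
    android.util.Log.d("MapRenderer", "frame: " + elapsedMs + " ms");
}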
