
Seam issue when mapping a texture to a sphere in OpenGL

I'm trying to create geometry to represent the Earth in OpenGL. I have what's more or less a sphere (closer to the ellipsoidal geoid that the Earth actually is). I map a texture of the Earth's surface onto it (probably a Mercator projection or something similar). The texture's UV coordinates correspond to the geometry's latitude and longitude. I have two issues that I'm unable to solve. I am using OpenSceneGraph, but I think this is a general OpenGL / 3D programming question.

  • There's a texture seam that's very apparent. I'm sure this occurs because I don't know how to map the UV coordinates to XYZ where the seam occurs. I only map UV coords up to the last vertex before wrapping around... You'd need to map two different UV coordinates to the same XYZ vertex to eliminate the seam. Is there a commonly used trick to get around this, or am I just doing it wrong?

  • There's crazy swirly distortion going on at the poles. I'm guessing this is because I map a single UV point at each pole (for Earth, I use [0.5,1] for the North Pole and [0.5,0] for the South Pole). What else would you do, though? I can sort of live with this... but it's extremely noticeable at lower-resolution meshes.
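The kind of lat/long UV mapping described above can be sketched as follows (plain C++; the function and names are illustrative, not the actual code from the question). It makes both problems visible: u comes from the azimuth and jumps from ~1 back to 0 at one meridian (the seam), and at the poles every u collapses onto a single position (the swirl).

```cpp
#include <cassert>
#include <cmath>

struct UV { float u, v; };

// Map a point on the unit sphere to equirectangular UV coordinates.
// u follows longitude: atan2 wraps from ~1 back to 0 at one meridian (the seam).
// v follows latitude: at the poles every u collapses onto one position (the swirl).
UV sphereToUV(float x, float y, float z) {
    const float PI = 3.14159265358979f;
    UV uv;
    uv.u = (std::atan2(x, z) + PI) / (2.0f * PI); // longitude -> [0,1]
    uv.v = (std::asin(y) + PI / 2.0f) / PI;       // latitude  -> [0,1]
    return uv;
}
```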

I've attached an image to show what I'm talking about.

[image: the rendered Earth showing the texture seam]

The general way this is handled is by using a cube map, not a 2D texture.

However, if you insist on using a 2D texture, you have to create a break in your mesh's topology. The reason you get that longitudinal line is because you have one vertex with a texture coordinate of something like 0.9 or so, and its neighboring vertex has a texture coordinate of 0.0. What you really want is that the 0.9 one neighbors a 1.0 texture coordinate.

Doing this means replicating the position down one line of the sphere. So you have the same position used twice in your data. One is attached to a texture coordinate of 1.0 and neighbors a texture coordinate of 0.9. The other has a texture coordinate of 0.0, and neighbors a vertex with 0.1.

Topologically, you need to take a longitudinal slice down your sphere.
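As a sketch of that duplication (plain C++; the grid layout and names are my own illustration, not the asker's code): a single latitude ring built with the duplicated seam column has segments + 1 vertices, where the first and last share a position but carry u = 0.0 and u = 1.0 respectively.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vertex { float x, y, z, u, v; };

// Build one latitude ring with a duplicated seam column: segments + 1
// vertices, the last one at the same position as the first but carrying
// u = 1.0 instead of 0.0, so no triangle interpolates across the wrap.
std::vector<Vertex> buildRing(int segments, float y, float radius) {
    const float PI = 3.14159265358979f;
    std::vector<Vertex> ring;
    for (int i = 0; i <= segments; ++i) {        // note <=: one extra column
        float lon = 2.0f * PI * i / segments;    // i == segments lands back on 0
        Vertex vert;
        vert.x = radius * std::sin(lon);
        vert.y = y;
        vert.z = radius * std::cos(lon);
        vert.u = (float)i / segments;            // runs 0.0 .. 1.0, never wraps
        vert.v = 0.5f;                           // placeholder for this ring's latitude
        ring.push_back(vert);
    }
    return ring;
}
```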

Your link really helped me out, furqan, thanks.
Why couldn't you figure it out? A point where I stumbled was that I didn't know you can exceed the [0,1] interval when calculating the texture coordinates. That makes it a lot easier to jump from one side of the texture to the other, with OpenGL doing all the interpolation and without having to calculate the exact position where the texture actually ends.
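To make that concrete (plain C++, my own illustrative sketch): with GL_REPEAT, a triangle edge running from u = 0.9 to u = 1.1 interpolates through only 0.2 of the texture and wraps cleanly at the seam, while an edge from 0.9 to 0.1 sweeps backwards through 0.8 of it. The repeat semantics reduce to taking the fractional part of the coordinate.

```cpp
#include <cassert>
#include <cmath>

// GL_REPEAT semantics: the sampled coordinate is the fractional part.
float repeatWrap(float u) {
    return u - std::floor(u);
}

// Linear interpolation, as the rasterizer does between two vertices.
float lerp(float a, float b, float t) {
    return a + t * (b - a);
}
```

Halfway along an edge from u = 0.9 to u = 1.1 the interpolated value is 1.0, which GL_REPEAT wraps to the texel at the seam; halfway from 0.9 to 0.1 it is 0.5, the middle of the texture, which is the visible artifact.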

It took a long time to figure this extremely annoying issue out. I'm programming in C# in Unity and I didn't want to duplicate any vertices (that would cause future issues with my concept), so I went with the shader idea, and it works out pretty well. Although I'm sure the code could use some heavy-duty optimization, I had to figure out how to port it over to Cg from this, but it works. I'm posting it in case someone else runs across this post, as I did, looking for a solution to the same problem.

    Shader "Custom/isoshader" {
        Properties {
            decal ("Base (RGB)", 2D) = "white" {}
        }
        SubShader {
            Pass {
                Fog { Mode Off }

                CGPROGRAM

                #pragma vertex vert
                #pragma fragment frag
                #define PI 3.141592653589793238462643383279

                sampler2D decal;

                struct appdata {
                    float4 vertex : POSITION;
                    float4 texcoord : TEXCOORD0;
                };

                struct v2f {
                    float4 pos : SV_POSITION;
                    float4 tex : TEXCOORD0;
                    float3 pass_xy_position : TEXCOORD1;
                };

                v2f vert(appdata v) {
                    v2f o;
                    o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                    o.pass_xy_position = v.vertex.xyz;
                    o.tex = v.texcoord;
                    return o;
                }

                float4 frag(v2f i) : COLOR {
                    // recompute u from the object-space position instead of
                    // using the interpolated coordinate, so there is no seam
                    float2 tc = i.tex.xy;
                    tc.x = (PI + atan2(i.pass_xy_position.x, i.pass_xy_position.z)) / (2 * PI);
                    float4 color = tex2D(decal, tc);
                    return color;
                }

                ENDCG
            }
        }
    }

As Nicol Bolas said, some triangles have UV coordinates going from ~0.9 back to 0, so the interpolation smears the texture around the seam. In my code, I've created this function to duplicate the vertices around the seam. This will create a sharp line splitting those vertices. If your texture has only water around the seam (the Pacific ocean?), you may not notice this line. Hope it helps.

/**
 *  After spherical projection, some triangles have vertices with
 *  UV coordinates that are far away (0 to 1), because the Azimuth
 *  at 2*pi = 0. Interpolating between 0 to 1 creates artifacts
 *  around that seam (the whole texture is thinly repeated at
 *  the triangles around the seam).
 *  This function duplicates vertices around the seam to avoid
 *  these artifacts.
 */
void PlatonicSolid::SubdivideAzimuthSeam() {
    if (m_texCoord == NULL) {
        ApplySphericalProjection();
    }

    // to take note of the triangles in the seam
    // (a variable-length array is a GCC/Clang extension; use std::vector in portable C++)
    int facesSeam[m_numFaces];

    // check all triangles, looking for triangles with vertices
    // separated ~2π. First count.
    int nSeam = 0;
    for (int i=0;i < m_numFaces; ++i) {
        // check the 3 vertices of the triangle
        int a = m_faces[3*i];
        int b = m_faces[3*i+1];
        int c = m_faces[3*i+2];
        // just check the seam in the azimuth
        float ua = m_texCoord[2*a];
        float ub = m_texCoord[2*b];
        float uc = m_texCoord[2*c];
        if (fabsf(ua-ub)>0.5f || fabsf(ua-uc)>0.5f || fabsf(ub-uc)>0.5f) {
            //test::printValue("Face: ", i, "\n");
            facesSeam[nSeam] = i;
            ++nSeam;
        }
    }

    if (nSeam==0) {
        // no changes
        return;
    }

    // reserve more memory
    int nVertex = m_numVertices;
    m_numVertices += nSeam;
    m_vertices = (float*)realloc((void*)m_vertices, 3*m_numVertices*sizeof(float));
    m_texCoord = (float*)realloc((void*)m_texCoord, 2*m_numVertices*sizeof(float));

    // now duplicate vertices in the seam
    // (the number of triangles/faces is the same)
    for (int i=0; i < nSeam; ++i, ++nVertex) {
        int t = facesSeam[i]; // triangle index
        // check the 3 vertices of the triangle
        int a = m_faces[3*t];
        int b = m_faces[3*t+1];
        int c = m_faces[3*t+2];
        // just check the seam in the azimuth
        float u_ab = fabsf(m_texCoord[2*a] - m_texCoord[2*b]);
        float u_ac = fabsf(m_texCoord[2*a] - m_texCoord[2*c]);
        float u_bc = fabsf(m_texCoord[2*b] - m_texCoord[2*c]);
        // select the vertex further away from the other 2
        int f = 2;
        if (u_ab >= 0.5f && u_ac >= 0.5f) {
            c = a;
            f = 0;
        } else if (u_ab >= 0.5f && u_bc >= 0.5f) {
            c = b;
            f = 1;
        }

        m_vertices[3*nVertex] = m_vertices[3*c];      // x
        m_vertices[3*nVertex+1] = m_vertices[3*c+1];  // y
        m_vertices[3*nVertex+2] = m_vertices[3*c+2];  // z
        // flip u across the seam (0 maps to 1 and vice versa)
        m_texCoord[2*nVertex] = 1.0f - m_texCoord[2*c];
        m_texCoord[2*nVertex+1] = m_texCoord[2*c+1];
        // change this face so all the vertices have close UV
        m_faces[3*t+f] = nVertex;
    }

}

You can also go a dirtier way: interpolate the X,Y positions between the vertex and fragment shaders and recalculate the correct texture coordinate in the fragment shader. This may be somewhat slower, but it doesn't involve duplicated vertices, and it's simpler, I think.

For example:
vertex shader:

#version 150 core
uniform mat4 projM;
uniform mat4 viewM;
uniform mat4 modelM;
in vec4 in_Position;
in vec2 in_TextureCoord;
out vec2 pass_TextureCoord;
out vec2 pass_xy_position;
void main(void) {
    gl_Position = projM * viewM * modelM * in_Position;
    pass_xy_position = in_Position.xy; // the 2D position interpolates well for a spinning sphere
    pass_TextureCoord = in_TextureCoord;
}

fragment shader:

#version 150 core
uniform sampler2D texture1;
in vec2 pass_xy_position;
in vec2 pass_TextureCoord;
out vec4 out_Color;

#define PI 3.141592653589793238462643383279

void main(void) {
    vec2 tc = pass_TextureCoord;
    tc.x = (PI + atan(pass_xy_position.y, pass_xy_position.x)) / (2 * PI); // calculate angle and map it to 0..1
    out_Color = texture(texture1, tc);
}

One approach is the one from the accepted answer. In the code generating the array of vertex attributes, you will have something like this:

// FOR EVERY TRIANGLE
const float threshold = 0.7;
if(tcoords_1.s > threshold || tcoords_2.s > threshold || tcoords_3.s > threshold)
{
    if(tcoords_1.s < 1. - threshold)
    {
        tcoords_1.s += 1.;
    }
    if(tcoords_2.s < 1. - threshold)
    {
        tcoords_2.s += 1.;
    }
    if(tcoords_3.s < 1. - threshold)
    {
        tcoords_3.s += 1.;
    }
}

If you have triangles which are not meridian-aligned, you will also want glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);. You also need to use glDrawArrays, since vertices with the same position will have different texture coordinates.
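The per-triangle shift above can be factored into a small standalone helper (plain C++ rendition of the same logic; the function name is mine):

```cpp
#include <cassert>
#include <cmath>

// Shift the s coordinates of one triangle so they are contiguous: any
// coordinate far below the triangle's high side gets +1.0, and GL_REPEAT
// wraps values above 1.0 back around at sampling time.
void rewrapTriangle(float& s1, float& s2, float& s3) {
    const float threshold = 0.7f;
    if (s1 > threshold || s2 > threshold || s3 > threshold) {
        if (s1 < 1.0f - threshold) s1 += 1.0f;
        if (s2 < 1.0f - threshold) s2 += 1.0f;
        if (s3 < 1.0f - threshold) s3 += 1.0f;
    }
}
```

Triangles that never cross the seam (no coordinate above the threshold) pass through unchanged, so the helper can be run over every triangle in the mesh.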

I think the better way to go is to eliminate the root of all evil, which is texture coords interpolation in this case. Since you know basically all about your sphere/ellipsoid, you can calculate texture coords, normals, etc. in the fragment shader based on position. This means that your CPU code generating vertex attributes will be much simpler and you can use indexed drawing again. And I don't think this approach is dirty. It's clean.
