
Create UV Texture map from DensePose Output

I am trying to generate a single UV texture map in the format of the SURREAL dataset. There is a notebook in the original DensePose repository that discusses how to apply texture transfer using an image from SMPL: github.com/facebookresearch/DensePose/blob/master/notebooks/DensePose-RCNN-Texture-Transfer.ipynb

However, in this case I am trying to use the outputs we get from DensePose directly:

In dump mode, I get the uv coordinates in data[0]['pred_densepose'][0].uv with dimensions: torch.Size([2, 1098, 529])
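
For concreteness, those tensors can be indexed like this (a sketch using the names above; index 0 picks the first detected person, and .labels is assumed to carry the body-part indices alongside .uv):

u_map, v_map = data[0]['pred_densepose'][0].uv    # each (H, W), values in [0, 1]
part_map = data[0]['pred_densepose'][0].labels    # (H, W), ints 0..24, 0 = background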

I overlaid the dp_u,dp_v visualization from running inference on an image, rendered on a black background. Here is the link to the image: https://densepose.s3.amazonaws.com/test1uv.0001.png

This is the command I used to get this inference: python3 apply_net.py show configs/densepose_rcnn_R_101_FPN_DL_WC2M_s1x.yaml model_final_de6e7a.pkl input.jpg dp_u,dp_v -v --output output.png

This is the link to the original image: https://densepose.s3.amazonaws.com/02_1_front.jpg

Using these components, I am trying to generate the 24-part UV texture map in the same format as SMPL: https://densepose.s3.amazonaws.com/extracted_smpl_texture_apprearance.png https://densepose.s3.amazonaws.com/texture_from_SURREAL.png

It would be extremely helpful if someone can share how to solve this problem. Please let me know if additional information is needed.

I don't know if the problem still persists or whether you were able to find a solution. In case anyone else runs into the same issues, here is my solution. I put together several different pieces of code and ideas from the official GitHub issue page for DensePose ( https://github.com/facebookresearch/DensePose/issues/68 ).

I assume that we already have the output of the apply_net.py utility from the GitHub DensePose repository. From your post, that is the dump output (the one you were able to obtain data[0]['pred_densepose'][0].uv from).
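
For completeness, this is roughly how the dump can be produced and loaded back (a sketch: the dump-mode flags mirror the show-mode command from your question, results.pkl is a placeholder name, and I assume the tensors inside are torch-serialized, so torch.load reads them back):

python3 apply_net.py dump configs/densepose_rcnn_R_101_FPN_DL_WC2M_s1x.yaml model_final_de6e7a.pkl input.jpg --output results.pkl -v

import torch

# read the dumped list of per-image results back into `data`
with open('results.pkl', 'rb') as f:
    data = torch.load(f)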

Let's do some coding:

import copy
import cv2  
import matplotlib
import numpy as np

from matplotlib import pyplot as plt

matplotlib.use('TkAgg')

# `data` is the list loaded from the dump pickle (see the loading sketch above)
results = data[0]
IMAGE_FILE = 'path/to/image.png'


def parse_iuv(result):
    # part labels (I) and UV coordinates of the first detected person;
    # UV is scaled from [0, 1] to [0, 255] to mimic the classic IUV images
    i = result['pred_densepose'][0].labels.cpu().numpy().astype(float)
    uv = (result['pred_densepose'][0].uv.cpu().numpy() * 255.0).astype(float)
    iuv = np.stack((uv[1, :, :], uv[0, :, :], i))  # channel order: V, U, I
    iuv = np.transpose(iuv, (1, 2, 0))  # (3, H, W) -> (H, W, 3)
    return iuv


def parse_bbox(result):
    # XYXY bounding box of the first detected person
    return result["pred_boxes_XYXY"][0].cpu().numpy()


def concat_textures(array):
    # stack the 24 per-part textures into a 4-row x 6-column atlas
    texture = []
    for i in range(4):
        tmp = array[6 * i]
        for j in range(6 * i + 1, 6 * i + 6):
            tmp = np.concatenate((tmp, array[j]), axis=1)
        texture = tmp if len(texture) == 0 else np.concatenate((texture, tmp), axis=0)
    return texture


def interpolate_tex(tex):
    # code adapted from https://github.com/facebookresearch/DensePose/issues/68
    valid_mask = np.array((tex.sum(0) != 0) * 1, dtype='uint8')
    radius_increase = 10
    kernel = np.ones((radius_increase, radius_increase), np.uint8)
    dilated_mask = cv2.dilate(valid_mask, kernel, iterations=1)
    region_to_fill = dilated_mask - valid_mask
    invalid_region = 1 - valid_mask
    actual_part_max = tex.max()
    actual_part_min = tex.min()
    actual_part_uint = np.array((tex - actual_part_min) / (actual_part_max - actual_part_min) * 255, dtype='uint8')
    actual_part_uint = cv2.inpaint(actual_part_uint.transpose((1, 2, 0)), invalid_region, 1,
                                   cv2.INPAINT_TELEA).transpose((2, 0, 1))
    actual_part = (actual_part_uint / 255.0) * (actual_part_max - actual_part_min) + actual_part_min
    # only use dilated part
    actual_part = actual_part * dilated_mask

    return actual_part


def get_texture(im, iuv, bbox, tex_part_size=200):
    # this part of the code creates an IUV image that matches the size of
    # the original image (the IUV output from DensePose covers only the
    # person's bounding box)
    im = im.transpose(2, 1, 0) / 255
    image_w, image_h = im.shape[1], im.shape[2]
    bbox[2] = bbox[2] - bbox[0]  # convert the XYXY box to XYWH in place
    bbox[3] = bbox[3] - bbox[1]
    x, y, w, h = [int(v) for v in bbox]
    bg = np.zeros((image_h, image_w, 3))
    bg[y:y + h, x:x + w, :] = iuv
    iuv = bg
    iuv = iuv.transpose((2, 1, 0))
    i, u, v = iuv[2], iuv[1], iuv[0]

    # the following part of the code iterates over the 24 parts and creates
    # a texture of size `tex_part_size x tex_part_size` for each of them
    n_parts = 24
    texture = np.zeros((n_parts, 3, tex_part_size, tex_part_size))
    
    for part_id in range(1, n_parts + 1):
        generated = np.zeros((3, tex_part_size, tex_part_size))

        x, y = u[i == part_id], v[i == part_id]
        # transform uv coordinates to the current UV texture coordinates:
        tex_u_coo = (x * (tex_part_size - 1) / 255).astype(int)
        tex_v_coo = (y * (tex_part_size - 1) / 255).astype(int)
        
        # clipping due to issues encountered in the densepose output;
        # for an unknown reason, some `uv` coordinates fall outside [0, 1]
        tex_u_coo = np.clip(tex_u_coo, 0, tex_part_size - 1)
        tex_v_coo = np.clip(tex_v_coo, 0, tex_part_size - 1)
        
        # write the corresponding pixels from the original image to the UV texture;
        # iterate over range(3) because there are 3 color channels
        for channel in range(3):
            generated[channel][tex_v_coo, tex_u_coo] = im[channel][i == part_id]
        
        # this part is not crucial, but gives you better results
        # (the texture comes out smoother)
        if np.sum(generated) > 0:
            generated = interpolate_tex(generated)

        # assign the part to the final texture carrier
        # (flip the V axis to match the target texture layout)
        texture[part_id - 1] = generated[:, ::-1, :]
    
    # concatenate textures and create 2D plane (UV)
    tex_concat = np.zeros((24, tex_part_size, tex_part_size, 3))
    for i in range(texture.shape[0]):
        tex_concat[i] = texture[i].transpose(2, 1, 0)
    tex = concat_textures(tex_concat)

    return tex


iuv = parse_iuv(results)
bbox = parse_bbox(results)
image = cv2.imread(IMAGE_FILE)[:, :, ::-1]  # convert BGR (OpenCV) to RGB
uv_texture = get_texture(image, iuv, bbox)

# plot texture or do whatever you like
plt.imshow(uv_texture)
plt.show()
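
If you also want to store the atlas as an image (e.g. as a SURREAL-style texture file), one way is to convert the float values back to 8-bit before writing; note the RGB-to-BGR flip that cv2.imwrite expects (a sketch; uv_texture.png is an arbitrary output name):

# the texture holds floats in [0, 1]; convert to uint8 and save as PNG
out = (np.clip(uv_texture, 0, 1) * 255).astype(np.uint8)
cv2.imwrite('uv_texture.png', out[:, :, ::-1])  # RGB -> BGR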

Enjoy
