
Replicate Photoshop sRGB to LAB Conversion

The task I want to achieve is to replicate Photoshop's RGB to LAB conversion.
For simplicity, I will describe what I did to extract only the L channel.

Extracting Photoshop's L Channel

Here is an RGB image which includes all the RGB colors (please click and download):

RGB color image

In order to extract Photoshop's LAB, what I did is the following:

  1. Loaded the image into Photoshop.
  2. Set Mode to LAB.
  3. Selected the L channel in the Channels panel.
  4. Set Mode to Grayscale.
  5. Set Mode to RGB.
  6. Saved as PNG.

This is the L channel from Photoshop (this is exactly what is seen on screen when the L channel is selected in LAB mode):

Photoshop's L channel image

sRGB to LAB Conversion

My main reference is Bruce Lindbloom's great site.
It is also known that Photoshop uses the D50 white point in its LAB mode (see also Wikipedia's LAB Color Space page).

Assuming the RGB image is in sRGB format, the conversion is given by:

sRGB -> XYZ (White Point D65) -> XYZ (White Point D50) -> LAB

Assuming the data is in float within the [0, 1] range, the stages are given by:

  1. Transform sRGB into XYZ.
    The conversion matrix is given by the RGB -> XYZ matrix (see sRGB D65).
  2. Convert from XYZ D65 to XYZ D50.
    The conversion is done using a Chromatic Adaptation Matrix. Since this step and the previous one are both matrix multiplications, they can be combined into a single matrix that goes directly from sRGB to XYZ D50 (see the bottom of the RGB to XYZ Matrix page); a sketch of the combined matrix is given after this list. Note that Photoshop uses the Bradford adaptation method.
  3. Convert from XYZ D50 to LAB.
    The conversion is done using the XYZ to LAB steps.
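For reference, here is a minimal Python/NumPy sketch of steps 1 and 2 combined. The Bradford-adapted sRGB (D65) -> XYZ (D50) matrix below is taken as an assumption from Bruce Lindbloom's page; note that its middle row matches the RGB_TO_Y_MAT constant used in the MATLAB code further down.

import numpy as np

# Combined matrix: linear sRGB -> XYZ with D50 white point (Bradford adaptation folded in).
# Values assumed from Bruce Lindbloom's "RGB to XYZ Matrix" page (sRGB, D50 adapted).
M_SRGB_TO_XYZ_D50 = np.array([[0.4360747, 0.3851515, 0.1430804],
                              [0.2225045, 0.7168786, 0.0606169],   # this row alone yields Y
                              [0.0139322, 0.0971045, 0.7141733]])

def srgb_to_xyz_d50(rgb):
    # rgb: float array of shape (..., 3) with values in [0, 1]
    rgb = np.asarray(rgb, dtype=float)
    # Step 1: undo the sRGB transfer curve (linearize)
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    # Steps 1+2 combined: linear sRGB -> XYZ (D50)
    return lin @ M_SRGB_TO_XYZ_D50.T

Only the Y component is needed for the L channel, so the middle row of this matrix on its own is enough for the MATLAB function below.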

MATLAB Code

Since, for a start, I'm only after the L channel, things are a bit simpler. The images are loaded into MATLAB and converted to float in the [0, 1] range.

This is the code:

%% Setting Environment Parameters

INPUT_IMAGE_RGB             = 'RgbColors.png';
INPUT_IMAGE_L_PHOTOSHOP     = 'RgbColorsL.png';


%% Loading Data

mImageRgb   = im2double(imread(INPUT_IMAGE_RGB));
mImageLPhotoshop     = im2double(imread(INPUT_IMAGE_L_PHOTOSHOP));
mImageLPhotoshop     = mImageLPhotoshop(:, :, 1); %<! All channels are identical


%% Convert to L Channel

mImageLMatlab = ConvertRgbToL(mImageRgb, 1);


%% Display Results
figure();
imshow(mImageLPhotoshop);
title('L Channel - Photoshop');

figure();
imshow(mImageLMatlab);
title('L Channel - MATLAB');

Where the function ConvertRgbToL() is given by:

function [ mLChannel ] = ConvertRgbToL( mRgbImage, sRgbMode )

OFF = 0;
ON  = 1;

RED_CHANNEL_IDX     = 1;
GREEN_CHANNEL_IDX   = 2;
BLUE_CHANNEL_IDX    = 3;

RGB_TO_Y_MAT = [0.2225045, 0.7168786, 0.0606169]; %<! D50

Y_CHANNEL_THR = 0.008856;

% sRGB Compensation
if(sRgbMode == ON)
    vLinIdx = mRgbImage < 0.04045;

    mRgbImage(vLinIdx)  = mRgbImage(vLinIdx) ./ 12.92;
    mRgbImage(~vLinIdx) = ((mRgbImage(~vLinIdx) + 0.055) ./ 1.055) .^ 2.4;
end

% RGB to XYZ (D50)
mY = (RGB_TO_Y_MAT(1) .* mRgbImage(:, :, RED_CHANNEL_IDX)) + (RGB_TO_Y_MAT(2) .* mRgbImage(:, :, GREEN_CHANNEL_IDX)) + (RGB_TO_Y_MAT(3) .* mRgbImage(:, :, BLUE_CHANNEL_IDX));

vYThrIdx = mY > Y_CHANNEL_THR;

mY3 = mY .^ (1 / 3);

mLChannel = ((vYThrIdx .* (116 * mY3 - 16.0)) + ((~vYThrIdx) .* (903.3 * mY))) ./ 100;


end

As one can see, the results are different.
Photoshop is much darker for most colors.

Does anyone know how to replicate Photoshop's LAB conversion?
Can anyone spot an issue in this code?

Thank you.

Latest answer (we know now that it is wrong; waiting for a proper answer)

Photoshop is very old and messy software. There's no clear documentation as to why this or that happens to the pixel values when you perform a conversion from one mode to another.

Your problem happens because when you convert the selected L* channel to greyscale in Adobe Photoshop, there is a change in gamma. Natively, the conversion uses a gamma of 1.74 for the single-channel-to-greyscale conversion. Don't ask me why; I would guess this is related to old laser printers (?).

Anyway, this is the best way I found to do it:

Open your file, turn it to LAB mode, and select the L channel only.

Then go to:

Edit > Convert to Profile

Select "custom gamma" and enter the value 2.0 (don't ask me why 2.0 works better; I have no idea what's in the minds of Adobe's software makers...). This operation will turn your picture into a greyscale one with only one channel.

Then you can convert it to RGB mode.
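As a rough check of this gamma hypothesis, the helper below (my own hypothetical snippet, not part of Photoshop or of the workflow above) fits a single exponent g such that (L/100)^g best matches Photoshop's greyscale output; if the difference really is a single gamma, the fitted value should land near the 1.74 / 2.0 figures guessed above.

import numpy as np

def estimate_gamma(l_computed, grey_photoshop, eps=1e-6):
    # Both inputs: float arrays in [0, 1] (the computed L already divided by 100).
    # Least-squares fit of log(grey) = g * log(L), i.e. grey ~= L ** g.
    x = np.clip(np.asarray(l_computed, dtype=float), eps, 1.0)
    y = np.clip(np.asarray(grey_photoshop, dtype=float), eps, 1.0)
    lx = np.log(x).ravel()
    ly = np.log(y).ravel()
    return float(np.dot(lx, ly) / np.dot(lx, lx))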

If you compare this result with your result, you will see differences of up to 4-point-something percent, all located in the darkest areas.

I suspect this is because the gamma curve is not applied to the dark values in LAB mode (as you know, all XYZ values below 0.008856 map linearly to LAB).

CONCLUSION:

As far as I know, there is no properly implemented way in Adobe Photoshop to extract the L channel from LAB mode to grey mode!

Previous answer

This is the result I get with my own method:

RGB2LAB

It seems to be exactly the same result as the Adobe Photoshop one.

I am not sure what went wrong on your side, since the steps you describe are exactly the ones I followed and would have advised you to follow. I don't have MATLAB, so I used Python:

import cv2, Syn

# your file
fn = "EASA2.png"

#reading the file
im = cv2.imread(fn,-1)

# OpenCV works in BGR; I'm switching to RGB
im = im[:,:,::-1]

#conversion to XYZ
XYZ = Syn.sRGB2XYZ(im)

#white points D65 and D50
WP_D65 = Syn.Yxy2XYZ((100,0.31271, 0.32902))
WP_D50 = Syn.Yxy2XYZ((100,0.34567, 0.35850))

#bradford
XYZ2 = Syn.bradford_adaptation(XYZ, WP_D65, WP_D50) 

#conversion to L*a*b*
LAB = Syn.XYZ2Lab(XYZ2, WP_D50)

#picking the L channel only
L = LAB[:,:,0] /100. * 255.

#image output
cv2.imwrite("result.png", L)

The Syn library is my own stuff; here are the functions (sorry for the mess):

import numpy as np

def sRGB2XYZ(sRGB):

    sRGB = np.array(sRGB)
    aShape = np.array([1,1,1]).shape
    anotherShape = np.array([[1,1,1],[1,1,1]]).shape
    origShape = sRGB.shape

    if sRGB.shape == aShape:
        sRGB = np.reshape(sRGB, (1,1,3))

    elif len(sRGB.shape) == len(anotherShape):
        h,d = sRGB.shape
        sRGB = np.reshape(sRGB, (1,h,d))

    w,h,d = sRGB.shape

    sRGB = np.reshape(sRGB, (w*h,d)).astype("float") / 255.

    m1 = sRGB[:,0] > 0.04045
    m1b = sRGB[:,0] <= 0.04045
    m2 = sRGB[:,1] > 0.04045
    m2b = sRGB[:,1] <= 0.04045
    m3 = sRGB[:,2] > 0.04045
    m3b = sRGB[:,2] <= 0.04045

    sRGB[:,0][m1] = ((sRGB[:,0][m1] + 0.055 ) / 1.055 ) ** 2.4
    sRGB[:,0][m1b] = sRGB[:,0][m1b] / 12.92

    sRGB[:,1][m2] = ((sRGB[:,1][m2] + 0.055 ) / 1.055 ) ** 2.4
    sRGB[:,1][m2b] = sRGB[:,1][m2b] / 12.92

    sRGB[:,2][m3] = ((sRGB[:,2][m3] + 0.055 ) / 1.055 ) ** 2.4
    sRGB[:,2][m3b] = sRGB[:,2][m3b] / 12.92

    sRGB *= 100. 

    X = sRGB[:,0] * 0.4124 + sRGB[:,1] * 0.3576 + sRGB[:,2] * 0.1805
    Y = sRGB[:,0] * 0.2126 + sRGB[:,1] * 0.7152 + sRGB[:,2] * 0.0722
    Z = sRGB[:,0] * 0.0193 + sRGB[:,1] * 0.1192 + sRGB[:,2] * 0.9505

    XYZ = np.zeros_like(sRGB)

    XYZ[:,0] = X
    XYZ[:,1] = Y
    XYZ[:,2] = Z

    XYZ = np.reshape(XYZ, origShape)

    return XYZ

def Yxy2XYZ(Yxy):

    Yxy = np.array(Yxy)
    aShape = np.array([1,1,1]).shape
    anotherShape = np.array([[1,1,1],[1,1,1]]).shape
    origShape = Yxy.shape

    if Yxy.shape == aShape:
        Yxy = np.reshape(Yxy, (1,1,3))

    elif len(Yxy.shape) == len(anotherShape):
        h,d = Yxy.shape
        Yxy = np.reshape(Yxy, (1,h,d))

    w,h,d = Yxy.shape

    Yxy = np.reshape(Yxy, (w*h,d)).astype("float")

    XYZ = np.zeros_like(Yxy)

    XYZ[:,0] = Yxy[:,1] * ( Yxy[:,0] / Yxy[:,2] )
    XYZ[:,1] = Yxy[:,0]
    XYZ[:,2] = ( 1 - Yxy[:,1] - Yxy[:,2] ) * ( Yxy[:,0] / Yxy[:,2] )

    return np.reshape(XYZ, origShape)

def bradford_adaptation(XYZ, Neutral_source, Neutral_destination):
    """should be checked if it works properly, but it seems OK"""

    XYZ = np.array(XYZ)
    ashape = np.array([1,1,1]).shape
    siVal = False

    if XYZ.shape == ashape:


        XYZ = np.reshape(XYZ, (1,1,3))
        siVal = True


    bradford = np.array(((0.8951000, 0.2664000, -0.1614000),
                          (-0.750200, 1.7135000,  0.0367000),
                          (0.0389000, -0.068500,  1.0296000)))

    inv_bradford = np.array(((0.9869929, -0.1470543, 0.1599627),
                              (0.4323053,  0.5183603, 0.0492912),
                              (-.0085287,  0.0400428, 0.9684867)))

    Xs,Ys,Zs = Neutral_source
    s = np.array(((Xs),
                   (Ys),
                   (Zs)))

    Xd,Yd,Zd = Neutral_destination
    d = np.array(((Xd),
                   (Yd),
                   (Zd)))


    source = np.dot(bradford, s)
    Us,Vs,Ws = source[0], source[1], source[2]

    destination = np.dot(bradford, d)
    Ud,Vd,Wd = destination[0], destination[1], destination[2]

    transformation = np.array(((Ud/Us, 0, 0),
                                (0, Vd/Vs, 0),
                                (0, 0, Wd/Ws)))

    M = np.mat(inv_bradford)*np.mat(transformation)*np.mat(bradford)

    w,h,d = XYZ.shape
    result = np.dot(M,np.rot90(np.reshape(XYZ, (w*h,d)),-1))
    result = np.rot90(result, 1)
    result = np.reshape(np.array(result), (w,h,d))

    if siVal == False:
        return result
    else:
        return result[0,0]

def XYZ2Lab(XYZ, neutral):
    """transforms XYZ to CIE Lab
    Neutral should be normalized to Y = 100"""

    XYZ = np.array(XYZ)
    aShape = np.array([1,1,1]).shape
    anotherShape = np.array([[1,1,1],[1,1,1]]).shape
    origShape = XYZ.shape

    if XYZ.shape == aShape:
        XYZ = np.reshape(XYZ, (1,1,3))

    elif len(XYZ.shape) == len(anotherShape):
        h,d = XYZ.shape
        XYZ = np.reshape(XYZ, (1,h,d))

    N_x, N_y, N_z = neutral
    w,h,d = XYZ.shape

    XYZ = np.reshape(XYZ, (w*h,d)).astype("float")

    XYZ[:,0] = XYZ[:,0]/N_x
    XYZ[:,1] = XYZ[:,1]/N_y
    XYZ[:,2] = XYZ[:,2]/N_z

    m1 = XYZ[:,0] > 0.008856
    m1b = XYZ[:,0] <= 0.008856
    m2 = XYZ[:,1] > 0.008856 
    m2b = XYZ[:,1] <= 0.008856
    m3 = XYZ[:,2] > 0.008856
    m3b = XYZ[:,2] <= 0.008856

    XYZ[:,0][m1] = XYZ[:,0][XYZ[:,0] > 0.008856] ** (1/3.0)
    XYZ[:,0][m1b] = ( 7.787 * XYZ[:,0][m1b] ) + ( 16 / 116.0 )

    XYZ[:,1][m2] = XYZ[:,1][XYZ[:,1] > 0.008856] ** (1/3.0)
    XYZ[:,1][m2b] = ( 7.787 * XYZ[:,1][m2b] ) + ( 16 / 116.0 )

    XYZ[:,2][m3] = XYZ[:,2][XYZ[:,2] > 0.008856] ** (1/3.0)
    XYZ[:,2][m3b] = ( 7.787 * XYZ[:,2][m3b] ) + ( 16 / 116.0 )

    Lab = np.zeros_like(XYZ)

    Lab[:,0] = (116. * XYZ[:,1] ) - 16.
    Lab[:,1] = 500. * ( XYZ[:,0] - XYZ[:,1] )
    Lab[:,2] = 200. * ( XYZ[:,1] - XYZ[:,2] )

    return np.reshape(Lab, origShape)

All conversions between colour spaces in Photoshop go through a CMM, which was sufficiently fast on circa-2000 hardware but not quite accurate. You can get a lot of 4-bit errors and some 7-bit errors with the Adobe CMM if you check a "round robin" conversion of RGB -> Lab -> RGB. That may cause posterisation. I always base my conversions on formulae, not on CMMs. However, the average deltaE of the error with the Adobe CMM and the Argyll CMM is quite acceptable.

Lab conversions are quite similar to RGB; only the non-linearity (gamma) is applied at the first step. Something like this:

  1. normalize XYZ to the white point

  2. bring the result to gamma 3 (keeping the shadow portion linear, depending on implementation)

  3. multiply the result by [0 116 0 -16; 500 -500 0 0; 0 200 -200 0]' (see the sketch after this list)
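A minimal NumPy sketch of those three steps, assuming a D50 reference white; the 3x4 matrix applied to [fx fy fz 1]' is simply a compact way of writing L = 116*fy - 16, a = 500*(fx - fy), b = 200*(fy - fz):

import numpy as np

# Matrix form of the final Lab step: [L a b]' = M_LAB * [fx fy fz 1]'
M_LAB = np.array([[  0.0,  116.0,    0.0, -16.0],
                  [500.0, -500.0,    0.0,   0.0],
                  [  0.0,  200.0, -200.0,   0.0]])

def xyz_to_lab(xyz, white=(0.96422, 1.0, 0.82521)):   # white: assumed D50 reference
    t = np.asarray(xyz, dtype=float) / np.asarray(white, dtype=float)  # 1. normalize to the white point
    f = np.where(t > 0.008856, np.cbrt(t), 7.787 * t + 16.0 / 116.0)   # 2. "gamma 3", linear in the shadows
    ones = np.ones(f.shape[:-1] + (1,))
    return np.concatenate([f, ones], axis=-1) @ M_LAB.T                # 3. multiply by the matrix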
