
Artifacts on Image from 16bpp to BMP

I have a stack of images which I took with a CCD camera on the microscope I am developing.

I'm using a Lumenera CCD camera which generates a 12bpp image, stretched to 16bpp by

<<4 (shifting 4 zero bits in at the LSB). I attach the original image, which looks sensible: (original image, jpg)

Here's another one: (image) And a montage of many of them, where there are vertical lines: (montage image)

The only "processing" I THINK I do on the montage is scaling the values between minValue and maxValue and compressing each value into a byte, in order to use it in an 8bpp bitmap, using the following line:

public static void ParseImage(ushort[] image, ushort minVal, ushort maxVal, int nx, int ny)
{
  ...........
  byte val = (byte)((image[i] - minVal) / (maxVal - minVal) * byte.MaxValue);

I'm aware of artifacts in processed images, which generally makes me point the finger at the camera, but the camera manufacturer said it might be related to the shift from 12 bits to 16 bits... Can anybody understand how, and if so, how to fix it? Or do I have some bug one might identify from these artifacts?
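As an aside on the scaling line above: since image, minVal, and maxVal are all ushort, the division happens in integer arithmetic and truncates toward zero, so the ratio must be computed in floating point before multiplying by byte.MaxValue. A minimal sketch of the scaling step (the helper name ScaleToByte is hypothetical):

```csharp
using System;

public static class Scaler
{
    // Hypothetical helper mirroring the scaling step in ParseImage.
    // The division is done in float; doing it in ushort arithmetic
    // would truncate the ratio to 0 (or 1 at the very top of the range).
    public static byte ScaleToByte(ushort value, ushort minVal, ushort maxVal)
    {
        // Clamp so values outside [minVal, maxVal] cannot wrap around.
        int v = Math.Min(Math.Max((int)value, minVal), maxVal);
        return (byte)((v - minVal) / (float)(maxVal - minVal) * byte.MaxValue);
    }
}
```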

Any help (hints, more homework for me..) will be gratefully accepted! Thanks

Not seeing the raw data, I can only work from the images you show. But I doubt your image creation code is wrong at all.

As far as I can see you have two issues, both of which stem from the original data.

  1. The overall brightness of the images is not the same.
  2. There is a drift in brightness across the images, with the top right being a little darker, like a shadow or a one-sided vignette.

The first issue is rather easy to overcome; the second, not so much.

Here is a simple, in fact really simplistic, approach to bringing rather uniform images to a common brightness level:

  1. Prepare the images; as they are rather grainy, we create a smaller version of each to do the stats.
  2. Calculate the overall brightness. We don't need to look at every pixel; a reasonably large number will do. I use 1000 randomly chosen pixels.
  3. Adapt the images to have a uniform brightness. This is best and fastest done with a ColorMatrix.
  4. Stitch the images together using the two color matrices.

I did this for the two larger ones. As you can see, the left portion is almost level, but to the right the shadow at the top is darker than at the bottom and the stitching artifacts show strongly.

If you can't resolve this at the camera level, or maybe by more careful lighting, the best solution in my opinion would be to create a norm image for correction purposes; it would be totally blank, i.e. without the motif, showing only the shadow. Then you can subtract this from each real image in some way, e.g. after calculating some factor to allow for differences in gamma.
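The norm-image idea could be sketched like this (the names and the division-based correction are my assumptions; classic flat-field correction divides by the reference rather than subtracting, which also compensates gain differences, so treat this as a starting point rather than the exact recipe):

```csharp
using System;

public static class ShadingCorrection
{
    // Sketch of per-pixel shading correction with a blank reference
    // ("norm") image. rawPixels and normPixels are assumed to be the
    // 16-bit data arrays, equal in length; normMean is the mean value
    // of the norm image. Each pixel is scaled by how much the reference
    // deviates from its mean at that position.
    public static ushort[] Correct(ushort[] rawPixels, ushort[] normPixels, float normMean)
    {
        var result = new ushort[rawPixels.Length];
        for (int i = 0; i < rawPixels.Length; i++)
        {
            // Math.Max guards against division by zero on dead pixels.
            float corrected = rawPixels[i] * normMean / Math.Max(1, (int)normPixels[i]);
            result[i] = (ushort)Math.Min(ushort.MaxValue, corrected);
        }
        return result;
    }
}
```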

Here is the result:

(result image)

..and here is the code:

private void button1_Click(object sender, EventArgs e)
{
    Bitmap bmp1 = (Bitmap) Image.FromFile(path1); 
    Bitmap bmp2 = (Bitmap) Image.FromFile(path2);
    int factor = 8;   // size reduction for smooth measurement data
    Size sz = bmp1.Size;
    Size szs = new Size(sz.Width / factor, sz.Height / factor);
    Bitmap bmp1s = new Bitmap(bmp1, szs);
    Bitmap bmp2s = new Bitmap(bmp2, szs);

    float avgBrightnes1 = getAvgBrightness(bmp1s, 1000);  // measure on the smaller, smoother versions
    float avgBrightnes2 = getAvgBrightness(bmp2s, 1000);

    float avgB12 = (avgBrightnes1 + avgBrightnes2) / 2f;
    float deltaB1 = avgB12 - avgBrightnes1;
    float deltaB2 = avgB12 - avgBrightnes2;

    Console.WriteLine("  B1 = " + avgBrightnes1.ToString("0.000")
                    + "  B2 = " + avgBrightnes2.ToString("0.000"));

    pictureBox1.Image = (Bitmap)bmp1;
    pictureBox2.Image = (Bitmap)bmp2;

    Rectangle r1 = new Rectangle(0, 0, sz.Width, sz.Height);
    Rectangle r2 = new Rectangle(0, sz.Height, sz.Width, sz.Height);

    Bitmap bmp12 = new Bitmap(sz.Width, sz.Height * 2);

    ColorMatrix M1 = new ColorMatrix();
    M1.Matrix40 =  M1.Matrix41 = M1.Matrix42 = deltaB1;
    ColorMatrix M2 = new ColorMatrix();
    M2.Matrix40 =  M2.Matrix41 = M2.Matrix42 = deltaB2;
    ImageAttributes iAtt = new ImageAttributes();

    using (Graphics g = Graphics.FromImage(bmp12))
    {
        iAtt.SetColorMatrix(M1, ColorMatrixFlag.Default, ColorAdjustType.Bitmap);
        g.DrawImage(bmp1,r1, 0, 0, sz.Width, sz.Height, GraphicsUnit.Pixel, iAtt);

        iAtt.ClearColorMatrix();
        iAtt.SetColorMatrix(M2, ColorMatrixFlag.Default, ColorAdjustType.Bitmap);
        g.DrawImage(bmp2,r2, 0, 0 ,sz.Width, sz.Height, GraphicsUnit.Pixel, iAtt);
    }
    pictureBox3.Image = (Bitmap)bmp12;
}

float getAvgBrightness(Bitmap bmp, int count)
{
    Random rnd = new Random(0);
    float b = 0f;
    for (int i = 0; i < count; i++)
    {
        b += bmp.GetPixel(rnd.Next(bmp.Width), rnd.Next(bmp.Height)).GetBrightness();
    }
    return b/count;
}

The getAvgBrightness function is very simple. One could use more advanced statistics to weight the most common or the median brightness levels more strongly; I think the MSChart control has built-in statistical functions one could use. But from what I see, the real issues are the gamma and the vignette shadow in the camera images.
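As one example of such a more robust statistic, a median of the sampled brightness values would be less thrown off by hot pixels or dust specks than the plain average (the method name and the float[] input are my own framing; the sampling itself would stay as in getAvgBrightness):

```csharp
using System;
using System.Linq;

public static class Stats
{
    // Median of sampled brightness values (0..1 as returned by
    // Color.GetBrightness); less sensitive to outliers than the mean.
    public static float GetMedianBrightness(float[] samples)
    {
        var sorted = samples.OrderBy(v => v).ToArray();
        int mid = sorted.Length / 2;
        return sorted.Length % 2 == 1
            ? sorted[mid]                              // odd count: middle element
            : (sorted[mid - 1] + sorted[mid]) / 2f;    // even count: average of middle two
    }
}
```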

Do note: even though the images are rather simple, stitching is not a simple task at all. The panorama stitching software out there is highly specialized! And in your images the artifacts are especially easy to spot because there is not much else for our eyes to see and discover.

PS: Looking at the 3rd image, one can't help but wonder why the images get progressively darker. I can't help here, but I suggest investigating this first. Also: some patterns seem to repeat, others don't. As usual, it is best to improve quality as early in the processing chain as possible.
