
Python test result is not the same as the caffe test result

My question is about the caffe test result: the Python script result is not equal to the caffe test result. I used AlexNet and my test accuracy is 0.9033.

Caffe test accuracy: 0.9033

Python accuracy: 0.8785

I used 40000 images for testing. The number of misclassified images should be 3868, but the number of misclassified images in my Python result is 4859. What is the problem?
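For what it's worth, both counts line up with the two reported accuracies over the 40000 test images; a quick check:

n_images = 40000
print(round(n_images * (1 - 0.9033)))   # ~3868 misclassifications expected at the caffe accuracy
print(1 - 4859 / float(n_images))       # ~0.8785, the accuracy implied by 4859 misclassifications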

Thank you.

Here is my caffe test command:

…/build/tools/caffe test --model …/my_deploy.prototxt --weights …/alex_24_11__iter_200000.caffemodel -gpu 0 -iterations 800

After that, I found and tried a Python script with my test data, but I don't get the same result. I used this script on another dataset before and got the same accuracy as my caffe test, but there I didn't use a mean file during either training or testing. Now I use a mean file for both training and testing. Maybe there is a problem with the mean file, but I followed everything I found in the tutorials.

  1. I created the LMDB.
  2. I used compute_image_mean to create the mean file from the LMDB. The size of the images in the LMDB is 256x256.
  3. I used 227x227 images in AlexNet (see the shape check after this list).
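A quick way to see how the saved mean relates to the network input is to print its shape. A minimal sketch, reusing the binaryproto path from the script below:

import caffe
import numpy as np

# Load the mean written by compute_image_mean and inspect its shape:
# it matches the 256x256 LMDB images, while the deploy input is 1x3x227x227.
blob = caffe.proto.caffe_pb2.BlobProto()
with open('.../image_mean.binaryproto', 'rb') as fp:
    blob.ParseFromString(fp.read())
mean = np.array(caffe.io.blobproto_to_array(blob))[0]
print(mean.shape)   # expected: (3, 256, 256)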

Python script:

import caffe
import numpy as np

caffe.set_mode_gpu()

model_def = '…/my_deploy.prototxt'
model_weights = '…/alex_24_11__iter_200000.caffemodel'
net = caffe.Net(model_def, model_weights, caffe.TEST)

# Convert the binaryproto mean file to a numpy array and save it.
blob = caffe.proto.caffe_pb2.BlobProto()
data = open('.../image_mean.binaryproto', 'rb').read()
blob.ParseFromString(data)
arr = np.array(caffe.io.blobproto_to_array(blob))
out = arr[0]
np.save('.../imageMean.npy', out)

# Per-channel mean pixel (BGR) computed from the 256x256 mean image.
mu = np.load('…/imageMean.npy')
mu = mu.mean(1).mean(1)

# Preprocessing: HWC -> CHW, subtract mean, scale to [0, 255], RGB -> BGR.
transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
transformer.set_transpose('data', (2, 0, 1))
transformer.set_mean('data', mu)
transformer.set_raw_scale('data', 255)
transformer.set_channel_swap('data', (2, 1, 0))

net.blobs['data'].reshape(1, 3, 227, 227)

f = open('…/val.txt', 'r')
f2 = open('…/result.txt', 'a')

# Classify each image listed in val.txt and write "filename prediction".
for x in range(0, 40000):
    a = f.readline()
    a = a.split(' ')
    image = caffe.io.load_image('…/' + a[0])
    transformed_image = transformer.preprocess('data', image)
    net.blobs['data'].data[...] = transformed_image
    output = net.forward()
    output_prob = output['prob'][0]
    f2.write(str(a[0]))
    f2.write(str(' '))
    f2.write(str(output_prob.argmax()))
    f2.write('\n')

f.close()
f2.close()
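The loop above only writes the predicted label to result.txt. A minimal sketch of how it could be scored against val.txt, assuming each val.txt line is "filename label" (as the split above suggests) and result.txt has one line per val.txt entry in the same order:

# Compare the predictions in result.txt with the ground-truth labels in val.txt.
correct = 0
total = 0
with open('…/val.txt') as truth, open('…/result.txt') as pred:
    for t, p in zip(truth, pred):
        label = int(t.split()[1])
        prediction = int(p.split()[1])
        correct += (label == prediction)
        total += 1
print(correct / float(total))   # should reproduce the 0.8785 accuracy reported above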

First layer of my deploy.prototxt:

layer {
  name: "input"
  type: "Input"
  top: "data"
  input_param { shape: { dim: 1 dim: 3 dim: 227 dim: 227 } }
}

Last layer of my deploy.prototxt:

layer {
  name: "prob"
  type: "Softmax"
  bottom: "fc8-16"
  top: "prob"
}

The other layers are the same as in train_val.prototxt.

Check that your preprocessing is the same when creating the LMDB and when processing the test data.

For example, if you use:

transformer.set_channel_swap('data', (2,1,0)) 

You should ensure that your LMDB also has these channels swapped (I assume this is an RGB to BGR conversion).
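One way to verify what the LMDB actually stores is to read back a single record. A minimal sketch; '.../my_lmdb' is a placeholder for the LMDB created in step 1, and it assumes the LMDB was built with convert_imageset, which stores images via OpenCV in BGR order with 0-255 values:

import lmdb
import caffe

# Read one record from the LMDB and inspect its shape and first pixel.
env = lmdb.open('.../my_lmdb', readonly=True)   # placeholder path
with env.begin() as txn:
    key, value = next(txn.cursor().iternext())
datum = caffe.proto.caffe_pb2.Datum()
datum.ParseFromString(value)
img = caffe.io.datum_to_array(datum)            # (channels, height, width), 0-255
print(img.shape, img[:, 0, 0])                  # first pixel; channel order should be B, G, R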

In particular, you say that you used the mean image during training. However, in your Transformer, you are computing and subtracting the mean pixel. This could explain the small difference between your two accuracies.
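If training really subtracted the full mean image, a closer match on the Python side would be to subtract the mean image itself, center-cropped to the 227x227 input, rather than the per-channel mean pixel. A minimal sketch that plugs into the Transformer set up in the script above:

import numpy as np

# Center-crop the 3x256x256 mean image to 227x227 and use it as the mean.
mu = np.load('…/imageMean.npy')                # shape (3, 256, 256)
crop = 227
top = (mu.shape[1] - crop) // 2
left = (mu.shape[2] - crop) // 2
mu_crop = mu[:, top:top + crop, left:left + crop]
transformer.set_mean('data', mu_crop)          # instead of mu.mean(1).mean(1)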
