
Core ML model float input for a PyTorch model

I have a PyTorch model that takes a 3 x width x height image as input, with pixel values normalized to the range 0-1.

E.g., the input in PyTorch:

from skimage import io
import numpy as np
import torch

img = io.imread(img_path)
input_img = torch.from_numpy(np.transpose(img, (2, 0, 1))).contiguous().float() / 255.0

I converted this model to Core ML and exported an mlmodel that takes the input with the correct dimensions:

Image (Color width x height)

However, my predictions are incorrect, because the model expects a float value between 0 and 1 while a CVPixelBuffer holds integer values between 0 and 255.

I tried to normalize the values inside the model, like so:

z = x.mul(1.0/255.0) # div op is not supported for export yet

However, when this op runs inside the model at the Core ML level, int * float is cast back to int, so all values are essentially 0.

The cast op is not supported for export either, e.g. x = x.float().
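The truncation described above can be reproduced with a minimal NumPy sketch: when the scaled result is stored back into an integer buffer, every pixel below 255 maps to a value under 1.0 and truncates to 0.

```python
import numpy as np

# A uint8 pixel buffer, as a CVPixelBuffer would hold it.
pixels = np.array([0, 64, 128, 255], dtype=np.uint8)

# Scaling with a float result (what we want): values land in [0, 1].
scaled_float = pixels.astype(np.float32) * (1.0 / 255.0)

# Scaling but storing back into an integer buffer (the failure mode):
# everything below 255 truncates toward zero.
scaled_int = (pixels * (1.0 / 255.0)).astype(np.uint8)

print(scaled_float)  # values in [0, 1]
print(scaled_int)    # [0 0 0 1]
```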

How can I make sure my input is properly scaled for prediction? Essentially, I want to take the RGB pixel values, float-divide them by 255.0, and pass the result to the model for inference.

I solved it using the Core ML ONNX converter's preprocessing_args, like so:

preprocessing_args={'image_scale': (1.0 / 255.0)}

Hope this helps someone
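For context, a minimal sketch of how that argument is passed to the onnx-coreml convert call. The file name 'model.onnx' and the input name 'input' are placeholder assumptions, not taken from the question; they depend on how the model was exported to ONNX.

```python
from onnx_coreml import convert

# Hedged sketch: assumes the PyTorch model was already exported to ONNX
# as 'model.onnx' and that its image input is named 'input'.
mlmodel = convert(
    model='model.onnx',
    image_input_names=['input'],                      # treat this input as an image
    preprocessing_args={'image_scale': 1.0 / 255.0},  # divide every pixel by 255
)
mlmodel.save('model.mlmodel')
```

With image_scale baked in, the Core ML runtime applies the division in float before the network sees the data, so the CVPixelBuffer can keep its native 0-255 values.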
