I am using Swift's Vision framework for deep learning and want to upload the input image to a backend using a REST API, for which I am converting my UIImage to MultipartFormData using the jpegData() and pngData() functions that Swift natively offers. I use session.sessionPreset = .vga640x480 to specify the image size in my app for processing.
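For context, the capture pipeline is presumably set up along these lines (a minimal sketch, not taken from the original post; the delegate and queue name are illustrative):

import AVFoundation

let session = AVCaptureSession()
session.sessionPreset = .vga640x480 // frames should arrive as 640×480 pixel buffers
if let device = AVCaptureDevice.default(for: .video),
   let input = try? AVCaptureDeviceInput(device: device),
   session.canAddInput(input) {
    session.addInput(input)
}
let videoOutput = AVCaptureVideoDataOutput()
// `self` is assumed to conform to AVCaptureVideoDataOutputSampleBufferDelegate
videoOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "video.frames"))
if session.canAddOutput(videoOutput) {
    session.addOutput(videoOutput)
}
session.startRunning()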
I was seeing a different image size in the backend, which I was able to confirm in the app, because a UIImage recreated from that image data has a different size.
This is how I convert the image to multipart data:

// MultipartFormData here is Alamofire's type
let multipartData = MultipartFormData()
if let imageData = self.image?.jpegData(compressionQuality: 1.0) {
    multipartData.append(imageData, withName: "image", fileName: "image.jpeg", mimeType: "image/jpeg")
}
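For completeness, that body is then presumably handed to Alamofire's upload API roughly like this (a sketch; the endpoint URL is a placeholder, not from the original post):

import Alamofire

let uploadURL = "https://example.com/api/upload" // placeholder endpoint
AF.upload(multipartFormData: multipartData, to: uploadURL)
    .responseData { response in
        debugPrint(response)
    }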
This is what I see in the Xcode debugger (screenshot not reproduced here).
Creating the UIImage directly from the CIImage does not work; you end up with a Data representation of the image at an incorrect scale:
let ciImage = CIImage(cvImageBuffer: pixelBuffer) // 640×480
let image = UIImage(ciImage: ciImage) // says it is 640×480 with scale of 1
guard let data = image.pngData() else { ... } // but if you extract `Data` and then recreate image from that, the size will be off by a multiple of your device’s scale
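A quick way to observe this (a sketch; the exact factor depends on the device's screen scale) is to round-trip the data and compare sizes:

if let data = image.pngData(), let roundTripped = UIImage(data: data) {
    print(image.size, image.scale)                // reports 640×480 at scale 1
    print(roundTripped.size, roundTripped.scale)  // pixel size inflated by the screen scale, e.g. 1280×960 on a 2x device
}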
However, if you create it via a CGImage, you will get the right result:
let ciImage = CIImage(cvImageBuffer: pixelBuffer)
let ciContext = CIContext()
// Render the CIImage into a CGImage so the UIImage is backed by real pixels
guard let cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent) else { return }
let image = UIImage(cgImage: cgImage) // 640×480, and the pixel size survives pngData()/jpegData()
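Putting both pieces together, a frame from the capture output can be converted via a CGImage before building the multipart body (a sketch assembled from the snippets above; error handling is elided):

let ciImage = CIImage(cvImageBuffer: pixelBuffer)
let ciContext = CIContext()
guard let cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent) else { return }
let image = UIImage(cgImage: cgImage) // stays 640×480

let multipartData = MultipartFormData()
if let imageData = image.jpegData(compressionQuality: 1.0) {
    multipartData.append(imageData, withName: "image", fileName: "image.jpeg", mimeType: "image/jpeg")
}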