
How do I use a trained .tflite model to create a mask on live-camera output?

I've started learning ML on iOS with Swift, and I now know a little bit about neural networks. I have a .tflite model that appears to be well trained to recognize nails, because the result looks like this:

[image: the model's nail segmentation result]

Now I need to create a mask on the live camera output whenever

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {}

is called.
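For context, here is a minimal sketch of feeding each frame from that callback into a TFLite interpreter. It assumes the TensorFlowLiteSwift pod's Interpreter API, a camera output configured for kCVPixelFormatType_32BGRA, and the stock DeepLab input of 257x257 Float32 RGB; FrameProcessor and rgbData are hypothetical names, not taken from my actual code:

import AVFoundation
import TensorFlowLite

final class FrameProcessor {
    private let interpreter: Interpreter

    init(modelPath: String) throws {
        interpreter = try Interpreter(modelPath: modelPath)
        try interpreter.allocateTensors()
    }

    // Call this from captureOutput(_:didOutput:from:) for each frame.
    // Returns the raw output tensor bytes, or nil if the frame is unusable.
    func process(_ sampleBuffer: CMSampleBuffer) throws -> Data? {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer),
              let input = rgbData(from: pixelBuffer, width: 257, height: 257) else {
            return nil
        }
        try interpreter.copy(input, toInputAt: 0)
        try interpreter.invoke()
        return try interpreter.output(at: 0).data
    }

    // Hypothetical preprocessing helper: nearest-neighbour scales a BGRA
    // frame to the model's input size and packs it as Float32 RGB in
    // [-1, 1], the range the stock DeepLab v3 .tflite models expect.
    private func rgbData(from pixelBuffer: CVPixelBuffer, width: Int, height: Int) -> Data? {
        guard CVPixelBufferGetPixelFormatType(pixelBuffer) == kCVPixelFormatType_32BGRA else { return nil }
        CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
        defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly) }
        guard let base = CVPixelBufferGetBaseAddress(pixelBuffer) else { return nil }

        let srcW = CVPixelBufferGetWidth(pixelBuffer)
        let srcH = CVPixelBufferGetHeight(pixelBuffer)
        let rowBytes = CVPixelBufferGetBytesPerRow(pixelBuffer)
        let src = base.assumingMemoryBound(to: UInt8.self)

        var floats = [Float32]()
        floats.reserveCapacity(width * height * 3)
        for y in 0..<height {
            for x in 0..<width {
                // Nearest source pixel; BGRA is 4 bytes per pixel.
                let p = (y * srcH / height) * rowBytes + (x * srcW / width) * 4
                floats.append(Float32(src[p + 2]) / 127.5 - 1.0) // R
                floats.append(Float32(src[p + 1]) / 127.5 - 1.0) // G
                floats.append(Float32(src[p])     / 127.5 - 1.0) // B
            }
        }
        return floats.withUnsafeBufferPointer { Data(buffer: $0) }
    }
}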

Currently, when I put the mask on the live camera feed, the output looks like this:

[image: the broken mask overlaid on the live camera feed]

What may be wrong with my model, or with the code that interprets its output?

Here you can see my ScannerViewController, which is used to preview the mask, and my DeepLabModel.
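One thing I keep second-guessing is the decoding of the output tensor, since getting it wrong produces exactly this kind of noise. As an illustration only (it assumes a Float32 output of shape [1, height, width, numClasses], which matches stock DeepLab exports; maskPixels and nailClass are hypothetical names), the usual arg-max decoding looks like this:

import Foundation

// Hypothetical post-processing sketch: reduces DeepLab logits of shape
// [1, height, width, numClasses] to a flat alpha mask that is 255 where
// the arg-max class is the nail class and 0 everywhere else.
func maskPixels(from outputData: Data,
                width: Int, height: Int,
                numClasses: Int, nailClass: Int = 1) -> [UInt8] {
    let logits = outputData.withUnsafeBytes { Array($0.bindMemory(to: Float32.self)) }
    var mask = [UInt8](repeating: 0, count: width * height)
    for i in 0..<(width * height) {
        let offset = i * numClasses
        var best = 0
        for c in 1..<numClasses where logits[offset + c] > logits[offset + best] {
            best = c
        }
        mask[i] = (best == nailClass) ? 255 : 0
    }
    return mask
}

The resulting bytes can then be wrapped in a single-channel CGImage and composited over the preview layer. Note that if the model actually emits an integer label map instead of Float32 logits, reading it as floats would produce garbage like the screenshot above.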

EDIT 1:

If you have any other model that can replace my DeepLabModel, I'd be happy with that too. Something here is wrong, and I don't know what.

EDIT 2:

I also wonder whether the pod used in DeepLabModel is the problem:

pod 'TensorFlowLiteGpuExperimental'
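For reference, the stable alternative would be the Swift pod, whose Interpreter API the sketches above assume (check the current TensorFlow Lite docs for its Metal/GPU variant):

pod 'TensorFlowLiteSwift'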

After analyzing your .tflite file hosted at the link above, I can say that it is well structured and gives the 2 labels you want, BUT it is not fully trained. Below are 3 pictures of the results after inference on an Android phone.

[images 1-3: inference results on an Android phone]

So there is nothing wrong with your code... the .tflite file is just not producing good results!

My advice is to continue training it with more pictures of hands and nails. I would recommend over 300 pictures with masks of different hands and nails, and about 30,000 epochs using DeepLab.

If you need a tool to help you with creating masks, use this.

You can always search Google or Kaggle for datasets to increase the number of images you are using.

If you need more info or anything else, you can tag me!

Happy coding!
