How to use a TFLite model with uint8 input and output data types in Firebase image labeling

I created an image classifier model using AutoML in Firebase, with the following input and output:

[  1 224 224   3]
<class 'numpy.uint8'>
[ 1 11]
<class 'numpy.uint8'>

But FirebaseModelDataType does not have a uint8 data type; it only supports INT32, FLOAT32, BYTE, and LONG. What should I do?

interpreter = FirebaseModelInterpreter.getInstance(options);
inputOutputOptions = new FirebaseModelInputOutputOptions.Builder()
        .setInputFormat(0, FirebaseModelDataType.FLOAT32, new int[]{1, 224, 224, 3})
        .setOutputFormat(0, FirebaseModelDataType.FLOAT32, new int[]{1, 11})
        .build();

This code won't run because the model's input and output are uint8.
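Of the four supported types, BYTE is the one intended for quantized models, so one possible workaround with the custom-model API is to declare the same tensor shapes as BYTE and feed raw pixel bytes instead of floats. This is only a minimal sketch, assuming the model's uint8 tensors map directly to byte arrays; the variable names are illustrative and not from the original post:

FirebaseModelInputOutputOptions quantizedOptions =
        new FirebaseModelInputOutputOptions.Builder()
                // assumption: a uint8 tensor is declared as BYTE in this API
                .setInputFormat(0, FirebaseModelDataType.BYTE, new int[]{1, 224, 224, 3})
                .setOutputFormat(0, FirebaseModelDataType.BYTE, new int[]{1, 11})
                .build();

// Feed raw 0-255 pixel bytes; a quantized model needs no float normalization
byte[][][][] input = new byte[1][224][224][3];
// ... fill input from the bitmap pixels ...

FirebaseModelInputs modelInputs = new FirebaseModelInputs.Builder().add(input).build();
interpreter.run(modelInputs, quantizedOptions)
        .addOnSuccessListener(result -> {
            // The output also comes back as raw uint8 scores (0-255), one per label
            byte[][] scores = result.getOutput(0);
        });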

I finally made it work. It turns out that an AutoML model is used differently from a custom model; here is how I used the AutoML model:

private void startLabel() {
    // Register the AutoML-exported model bundled in the app's assets;
    // manifest.json points at the model and label files.
    FirebaseLocalModel localModel = new FirebaseLocalModel.Builder("my_local_model")
            .setAssetFilePath("manifest.json")
            .build();

    FirebaseModelManager.getInstance().registerLocalModel(localModel);
    // Grab a frame from the preview TextureView every 2 seconds and label it
    timer = new Timer();
    timer.schedule(new TimerTask() {
        @Override
        public void run() {
            FirebaseVisionImage image = FirebaseVisionImage.fromBitmap(textureView.getBitmap());
            // Use the locally registered AutoML model; ignore labels below 0.55 confidence
            FirebaseVisionOnDeviceAutoMLImageLabelerOptions labelerOptions = new FirebaseVisionOnDeviceAutoMLImageLabelerOptions.Builder()
                    .setLocalModelName("my_local_model")
                    .setConfidenceThreshold(0.55f)
                    .build();
            try {

                FirebaseVisionImageLabeler labeler = FirebaseVision.getInstance().getOnDeviceAutoMLImageLabeler(labelerOptions);
                labeler.processImage(image)
                        .addOnSuccessListener(new OnSuccessListener<List<FirebaseVisionImageLabel>>() {
                            @Override
                            public void onSuccess(List<FirebaseVisionImageLabel> firebaseVisionImageLabels) {
                                if(!firebaseVisionImageLabels.isEmpty()){
                                    MoneyReader.this.result.setText(firebaseVisionImageLabels.get(0).getText());
                                    if(isTTSReady){
                                        tts.speak(firebaseVisionImageLabels.get(0).getText(), TextToSpeech.QUEUE_ADD, null, "DEFAULT");
                                    }
                                }else{
                                    status.setText("Nothing Recognized");
                                }
                            }
                        })
                        .addOnFailureListener(new OnFailureListener() {
                            @Override
                            public void onFailure(@NonNull Exception e) {
                                Toast.makeText(MoneyReader.this, e.getMessage(), Toast.LENGTH_SHORT).show();
                            }
                        });
            } catch (FirebaseMLException e) {
                Toast.makeText(MoneyReader.this, e.getMessage(), Toast.LENGTH_SHORT).show();
            }
        }
    }, 0, 2000);
}
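The key difference from the custom-model code above is that the AutoML image labeler never asks for FirebaseModelInputOutputOptions: it reads the tensor shapes and the uint8 quantization from the exported model itself, so the data type problem goes away. setAssetFilePath("manifest.json") should point at the manifest that AutoML exports alongside model.tflite and dict.txt in the app's assets folder.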
