
PlatformException when trying to detect objects on a custom tflite model

I trained a custom model with Cloud AutoML that is supposed to detect markings on a sheet of paper. I exported the dataset as a TFLite file and hosted it on Firebase.

I managed to download the file and initialise the objectDetector just fine, but I get an error when processing an input image.

Here is my code:

Initialising the detector in the cubit:

  initialiseDetector({double confidenceThreshold = 0.5, int maximumLabelsPerObject = 10}) async {
    emit(ShoddyLoading(state.mainShoddyState.copyWith(message: 'Loading object detector')));
    try {
      ObjectDetector objectDetector = await ShoddyHelper.initialiseDetector(
        processingFromDownloadedFile: true,
        modelFile: state.mainShoddyState.modelFile,
        confidenceThreshold: confidenceThreshold,
        maximumLabelsPerObject: maximumLabelsPerObject,
      );
      emit(ShoddyModelLoaded(state.mainShoddyState.copyWith(objectDetector: objectDetector, message: 'Ready to start processing images')));
    } catch (error) {
      emit(ShoddyError(state.mainShoddyState.copyWith(message: error.toString())));
    }
  }

The helper/utility file for downloading or using the model file:

  static Future<ObjectDetector> initialiseDetector({File? modelFile, bool processingFromDownloadedFile = true, required double confidenceThreshold, required int maximumLabelsPerObject}) async {
    if (processingFromDownloadedFile) {
      if (modelFile != null) {
        return await initializeLocalDetector(modelFile, confidenceThreshold, maximumLabelsPerObject);
      } else {
        File modelFile = await loadModelFileFromFirebase();
        return await initializeLocalDetector(modelFile, confidenceThreshold, maximumLabelsPerObject);
      }
    } else {
      return await initializeFirebaseDetector(confidenceThreshold, maximumLabelsPerObject);
    }
  }

// Download the model file from firebase first
  static Future<File> loadModelFileFromFirebase(String modelName) async {
    try {
      FirebaseModelDownloader downloader = FirebaseModelDownloader.instance;

      List<FirebaseCustomModel> models = await downloader.listDownloadedModels();
      for (FirebaseCustomModel model in models) {
        print('Name: ${model.name}');
      }

      FirebaseModelDownloadConditions conditions = FirebaseModelDownloadConditions(
        iosAllowsCellularAccess: true,
        iosAllowsBackgroundDownloading: false,
        androidChargingRequired: false,
        androidWifiRequired: false,
        androidDeviceIdleRequired: false,
      );

      FirebaseCustomModel model = await downloader.getModel(
        modelName,
        FirebaseModelDownloadType.latestModel,
        conditions,
      );

      File modelFile = model.file;

      return modelFile;
    } catch (exception) {
      print('Failed on loading your model from Firebase: $exception');
      print('The program will not be resumed');
      rethrow;
    }
  }

  // Use a file downloaded from firebase
  static Future<ObjectDetector> initializeLocalDetector(File modelFile, double confidenceThreshold, int maximumLabelsPerObject) async {
    try {
      final options = LocalObjectDetectorOptions(
        mode: DetectionMode.single,
        modelPath: modelFile.path,
        classifyObjects: true,
        multipleObjects: true,
        confidenceThreshold: confidenceThreshold,
        maximumLabelsPerObject: maximumLabelsPerObject,
      );

      return ObjectDetector(options: options);
    } catch (exception) {
      print('Failed on loading your model to the TFLite interpreter: $exception');
      print('The program will not be resumed');
      rethrow;
    }
  }

  // Use the model file directly from firebase
  static Future<ObjectDetector> initializeFirebaseDetector(String modelName, double confidenceThreshold, int maximumLabelsPerObject) async {
    try {
      final options = FirebaseObjectDetectorOptions(
        mode: DetectionMode.single,
        modelName: modelName,
        classifyObjects: true,
        multipleObjects: true,
        confidenceThreshold: confidenceThreshold,
        maximumLabelsPerObject: maximumLabelsPerObject,
      );

      return ObjectDetector(options: options);
    } catch (exception) {
      print('Failed on loading your model to the TFLite interpreter: $exception');
      print('The program will not be resumed');
      rethrow;
    }
  }

The function that processes the image:

  processImage(File file) async {
    emit(ShoddyModelProcessing(state.mainShoddyState.copyWith(message: 'Looking for objects on the selected image')));
    try {
      if (state.mainShoddyState.objectDetector != null) {
        InputImage inputImage = InputImage.fromFilePath(file.path);
        List<DetectedObject> objects = await state.mainShoddyState.objectDetector!.processImage(inputImage);
        if (objects.isNotEmpty) {
          List<ObjectModel> objectModels = objects.map((object) => ObjectModel(object)).toList();
          emit(ShoddyModelProcessed(state.mainShoddyState.copyWith(objects: objectModels, filteredObjects: objectModels, message: 'Image processed with results')));
          changeMatchPercentage(0.35);
        } else {
          emit(ShoddyModelProcessed(state.mainShoddyState.copyWith(objects: [], filteredObjects: [], message: 'Image processed with no results')));
        }
      }
    } catch (error) {
      emit(ShoddyError(state.mainShoddyState.copyWith(message: error.toString())));
    }
  }

When I call:

        List<DetectedObject> objects = await state.mainShoddyState.objectDetector!.processImage(inputImage);

I get the following error:

PlatformException(Error 3, com.google.visionkit.pipeline.error, Pipeline failed to fully start:
CalculatorGraph::Run() failed in Run: 
Calculator::Open() for node "BoxClassifierCalculator" failed: #vk Unexpected number of dimensions for output index 0: got 3D, expected either 2D (BxN with B=1) or 4D (BxHxWxN with B=1, W=1, H=1)., null)

Is there anything I'm missing?

You have exported the model (not the dataset) as a TFLite model, and you are using ML Kit's object detection API.

For a TFLite model to be compatible with ML Kit, it needs to accept 2-dimensional or 4-dimensional tensors, as described in the ML Kit documentation.

The exported model appears to accept 3-D tensors. The fix is to go back to the tool you used to develop the model and make sure the model's interface is 4-D, following the specifications in that documentation.
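
To double-check what the export actually produced, you can print the tensor shapes of the downloaded file on-device before handing it to ML Kit. This is only a minimal sketch and assumes the tflite_flutter package (which is not used in the code above); printModelShapes is just an illustrative name:

// Minimal sketch (assumes the tflite_flutter package): print the input and
// output tensor shapes of the downloaded model file, so you can confirm whether
// each output is 2-D (1xN) or 4-D (1x1x1xN) as ML Kit expects.
import 'dart:io';
import 'package:tflite_flutter/tflite_flutter.dart';

Future<void> printModelShapes(File modelFile) async {
  final interpreter = Interpreter.fromFile(modelFile);
  for (final tensor in interpreter.getInputTensors()) {
    print('Input ${tensor.name}: ${tensor.shape}');
  }
  for (final tensor in interpreter.getOutputTensors()) {
    // A 3-D shape here is what triggers the BoxClassifierCalculator error above.
    print('Output ${tensor.name}: ${tensor.shape}');
  }
  interpreter.close();
}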

According to the ML Kit documentation, it is not possible to do object detection with an object detection model custom-trained with AutoML Vision:

https://developers.google.com/ml-kit/custom-models#automl_vision_edge

Note: ML Kit only supports custom image classification models. Although AutoML Vision allows training of object detection models, these cannot be used with ML Kit.
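
In other words, the AutoML Vision Edge object detection export cannot be consumed by ML Kit's object detector at all. If the task can be reframed as classification (labelling the whole image rather than locating the markings), ML Kit does support custom classification models through its image labeling API. A rough sketch, assuming the google_mlkit_image_labeling package and its LocalLabelerOptions; labelWithCustomModel is just an illustrative name:

// Rough sketch (assumes the google_mlkit_image_labeling package): loading a
// custom *classification* model, the kind of custom model ML Kit does support,
// through the image labeling API instead of the object detection API.
import 'dart:io';
import 'package:google_mlkit_commons/google_mlkit_commons.dart'; // InputImage
import 'package:google_mlkit_image_labeling/google_mlkit_image_labeling.dart';

Future<List<ImageLabel>> labelWithCustomModel(File modelFile, File imageFile) async {
  final options = LocalLabelerOptions(
    modelPath: modelFile.path,
    confidenceThreshold: 0.5,
  );
  final labeler = ImageLabeler(options: options);
  final labels = await labeler.processImage(InputImage.fromFilePath(imageFile.path));
  await labeler.close();
  return labels;
}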
