
PlatformException when trying to detect objects on a custom tflite model

I trained a custom model with Cloud AutoML that is supposed to detect markings on a sheet of paper. I exported the dataset as a TFLite file and hosted it on Firebase.

I manage to download the file and initialise the objectDetector just fine, but I get an error when processing the input image.

Here is my code:

Initialising the detector in a cubit:

  Future<void> initialiseDetector({double confidenceThreshold = 0.5, int maximumLabelsPerObject = 10}) async {
    emit(ShoddyLoading(state.mainShoddyState.copyWith(message: 'Loading object detector')));
    try {
      ObjectDetector objectDetector = await ShoddyHelper.initialiseDetector(
        processingFromDownloadedFile: true,
        modelFile: state.mainShoddyState.modelFile,
        confidenceThreshold: confidenceThreshold,
        maximumLabelsPerObject: maximumLabelsPerObject,
      );
      emit(ShoddyModelLoaded(state.mainShoddyState.copyWith(objectDetector: objectDetector, message: 'Ready to start processing images')));
    } catch (error) {
      emit(ShoddyError(state.mainShoddyState.copyWith(message: error.toString())));
    }
  }

Helper/utility file for downloading or using the model file:

  static Future<ObjectDetector> initialiseDetector({File? modelFile, bool processingFromDownloadedFile = true, required double confidenceThreshold, required int maximumLabelsPerObject}) async {
    if (processingFromDownloadedFile) {
      if (modelFile != null) {
        return await initializeLocalDetector(modelFile, confidenceThreshold, maximumLabelsPerObject);
      } else {
        File modelFile = await loadModelFileFromFirebase();
        return await initializeLocalDetector(modelFile, confidenceThreshold, maximumLabelsPerObject);
      }
    } else {
      return await initializeFirebaseDetector(confidenceThreshold, maximumLabelsPerObject);
    }
  }

// Download the model file from firebase first
  static Future<File> loadModelFileFromFirebase(String modelName) async {
    try {
      FirebaseModelDownloader downloader = FirebaseModelDownloader.instance;

      List<FirebaseCustomModel> models = await downloader.listDownloadedModels();
      for (FirebaseCustomModel model in models) {
        print('Name: ${model.name}');
      }

      FirebaseModelDownloadConditions conditions = FirebaseModelDownloadConditions(
        iosAllowsCellularAccess: true,
        iosAllowsBackgroundDownloading: false,
        androidChargingRequired: false,
        androidWifiRequired: false,
        androidDeviceIdleRequired: false,
      );

      FirebaseCustomModel model = await downloader.getModel(
        modelName,
        FirebaseModelDownloadType.latestModel,
        conditions,
      );

      File modelFile = model.file;

      return modelFile;
    } catch (exception) {
      print('Failed on loading your model from Firebase: $exception');
      print('The program will not be resumed');
      rethrow;
    }
  }

  // Use a file downloaded from firebase
  static Future<ObjectDetector> initializeLocalDetector(File modelFile, double confidenceThreshold, int maximumLabelsPerObject) async {
    try {
      final options = LocalObjectDetectorOptions(
        mode: DetectionMode.single,
        modelPath: modelFile.path,
        classifyObjects: true,
        multipleObjects: true,
        confidenceThreshold: confidenceThreshold,
        maximumLabelsPerObject: maximumLabelsPerObject,
      );

      return ObjectDetector(options: options);
    } catch (exception) {
      print('Failed on loading your model to the TFLite interpreter: $exception');
      print('The program will not be resumed');
      rethrow;
    }
  }

  // Use the model file directly from firebase
  static Future<ObjectDetector> initializeFirebaseDetector(String modelName, double confidenceThreshold, int maximumLabelsPerObject) async {
    try {
      final options = FirebaseObjectDetectorOptions(
        mode: DetectionMode.single,
        modelName: modelName,
        classifyObjects: true,
        multipleObjects: true,
        confidenceThreshold: confidenceThreshold,
        maximumLabelsPerObject: maximumLabelsPerObject,
      );

      return ObjectDetector(options: options);
    } catch (exception) {
      print('Failed on loading your model to the TFLite interpreter: $exception');
      print('The program will not be resumed');
      rethrow;
    }
  }

The function that processes the image:

  Future<void> processImage(File file) async {
    emit(ShoddyModelProcessing(state.mainShoddyState.copyWith(message: 'Looking for objects on the selected image')));
    try {
      if (state.mainShoddyState.objectDetector != null) {
        InputImage inputImage = InputImage.fromFilePath(file.path);
        List<DetectedObject> objects = await state.mainShoddyState.objectDetector!.processImage(inputImage);
        if (objects.isNotEmpty) {
          // Map the detected objects into the app's own model class.
          List<ObjectModel> objectModels = objects.map((object) => ObjectModel(object)).toList();
          emit(ShoddyModelProcessed(state.mainShoddyState.copyWith(objects: objectModels, filteredObjects: objectModels, message: 'Image processed with results')));
          changeMatchPercentage(0.35);
        } else {
          emit(ShoddyModelProcessed(state.mainShoddyState.copyWith(objects: [], filteredObjects: [], message: 'Image processed with no results')));
        }
      }
    } catch (error) {
      emit(ShoddyError(state.mainShoddyState.copyWith(message: error.toString())));
    }
  }

When I call:

        List<DetectedObject> objects = await state.mainShoddyState.objectDetector!.processImage(inputImage);

I get the following error:

PlatformException(Error 3, com.google.visionkit.pipeline.error, Pipeline failed to fully start:
CalculatorGraph::Run() failed in Run: 
Calculator::Open() for node "BoxClassifierCalculator" failed: #vk Unexpected number of dimensions for output index 0: got 3D, expected either 2D (BxN with B=1) or 4D (BxHxWxN with B=1, W=1, H=1)., null)

Is there something I'm missing?

You have exported your model (not the dataset) as a TFLite model, and you are using ML Kit's detection API.

For a TFLite model to be compatible with ML Kit, its tensors need to be 2-dimensional or 4-dimensional, as described here.

The exported model apparently produces a 3-D output tensor. The fix is to go back to the tool you used to develop the model and make sure the model interface is 4-D, following the specification in this documentation.
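
As a quick sanity check (this sketch is not part of the original answer), you can inspect the downloaded file's tensor shapes before handing it to ML Kit. The example below assumes the tflite_flutter package; the helper name is illustrative.

import 'dart:io';

import 'package:tflite_flutter/tflite_flutter.dart';

// Prints the input/output tensor shapes of a TFLite model file so you can
// verify that the outputs are 2-D (BxN) or 4-D (BxHxWxN) as ML Kit expects.
void inspectModelTensors(File modelFile) {
  final interpreter = Interpreter.fromFile(modelFile);
  for (final tensor in interpreter.getInputTensors()) {
    print('Input  "${tensor.name}": shape ${tensor.shape}');
  }
  for (final tensor in interpreter.getOutputTensors()) {
    print('Output "${tensor.name}": shape ${tensor.shape}');
  }
  interpreter.close();
}

If output index 0 comes back with three dimensions (for example [1, N, 4]), the model is most likely an object-detection export and will trigger exactly the BoxClassifierCalculator error shown above.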

According to the ML Kit documentation, it is not possible to do object detection with a custom object-detection model trained with AutoML Vision:

https://developers.google.com/ml-kit/custom-models#automl_vision_edge

Note: ML Kit only supports custom image classification models. Although AutoML Vision allows training of object detection models, these cannot be used with ML Kit.
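
In other words, only a custom classification model can be plugged into ML Kit. Purely as an illustration (this is not code from the question, and it assumes the google_mlkit_image_labeling package), a supported custom classification model would go through the image-labeling API rather than the object detector, roughly like this:

import 'dart:io';

import 'package:google_mlkit_image_labeling/google_mlkit_image_labeling.dart';

// Runs a custom *classification* TFLite model (the only kind of custom model
// ML Kit supports) over an image and returns the predicted labels.
Future<List<ImageLabel>> labelWithCustomModel(File modelFile, File imageFile) async {
  final options = LocalLabelerOptions(
    modelPath: modelFile.path,
    confidenceThreshold: 0.5,
  );
  final labeler = ImageLabeler(options: options);
  try {
    return await labeler.processImage(InputImage.fromFilePath(imageFile.path));
  } finally {
    await labeler.close();
  }
}

If you need actual object detection from an AutoML-trained model, you would have to run the TFLite file yourself (for example with tflite_flutter) instead of going through ML Kit's ObjectDetector.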
