
How to integrate tflite model in Android Studio to recognize sounds (Java language)

I need to create an Android project that gets input from the microphone and classifies what is being recorded. For this, I created and trained a convolutional neural network model using Keras in Python on Google Colab. The dataset was obtained as follows: using the UrbanSound8K dataset, I read the .wav files (with the soundfile library in Python) and stored them in NumPy arrays. Then I did some processing and turned each NumPy array (one per sound file) into a 2D feature matrix of shape 10x40 (also in NumPy format).

I trained a convolutional neural network on these "image features". After training, I saved the model to a .h5 file and then converted it into a .tflite model.

Now I have to integrate this .tflite model into my Android app, and I'm not sure how to implement this.
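For the integration step, here is a minimal sketch of loading the converted model with the TensorFlow Lite `Interpreter` from the app's assets folder. The file name `soundclassifier.tflite` and the class name are illustrative; you also typically need the `org.tensorflow:tensorflow-lite` dependency in `build.gradle` and `aaptOptions { noCompress "tflite" }` so the asset is not compressed.

```java
import android.content.Context;
import android.content.res.AssetFileDescriptor;
import org.tensorflow.lite.Interpreter;

import java.io.FileInputStream;
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class SoundClassifier {

    private final Interpreter interpreter;

    public SoundClassifier(Context context) throws IOException {
        // Memory-map the .tflite file placed in app/src/main/assets/
        interpreter = new Interpreter(loadModelFile(context, "soundclassifier.tflite"));
    }

    private static MappedByteBuffer loadModelFile(Context context, String fileName) throws IOException {
        AssetFileDescriptor fd = context.getAssets().openFd(fileName);
        try (FileInputStream in = new FileInputStream(fd.getFileDescriptor());
             FileChannel channel = in.getChannel()) {
            return channel.map(FileChannel.MapMode.READ_ONLY,
                    fd.getStartOffset(), fd.getDeclaredLength());
        }
    }
}
```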

I need to capture audio from the phone's microphone, get the audio samples into an array, and then process that array into the 10x40 feature matrix so that it matches the input of the .tflite model. How can I do this in Java code?
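A minimal sketch of the capture step using `AudioRecord`, assuming 22050 Hz mono 16-bit PCM (adjust the rate to whatever you used when training). The `extractFeatures` method is a hypothetical placeholder: you must reproduce there exactly the same preprocessing you did in Python to obtain the 10x40 matrix, since any mismatch will make the model's predictions meaningless.

```java
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;

public class MicFeatureSource {

    private static final int SAMPLE_RATE = 22050;   // assumption: must match the training data
    private static final int NUM_FRAMES = 10;       // rows of the feature matrix
    private static final int NUM_FEATURES = 40;     // columns of the feature matrix

    /** Records roughly one second of audio and returns it as floats normalized to [-1, 1]. */
    public float[] recordSamples() {
        int minBuf = AudioRecord.getMinBufferSize(SAMPLE_RATE,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
                SAMPLE_RATE, AudioFormat.CHANNEL_IN_MONO,
                AudioFormat.ENCODING_PCM_16BIT, Math.max(minBuf, SAMPLE_RATE * 2));

        short[] pcm = new short[SAMPLE_RATE];        // ~1 s of mono 16-bit samples
        recorder.startRecording();
        int read = 0;
        while (read < pcm.length) {
            int n = recorder.read(pcm, read, pcm.length - read);
            if (n <= 0) break;
            read += n;
        }
        recorder.stop();
        recorder.release();

        float[] samples = new float[read];
        for (int i = 0; i < read; i++) {
            samples[i] = pcm[i] / 32768f;            // scale 16-bit PCM to [-1, 1]
        }
        return samples;
    }

    /** Hypothetical placeholder: reimplement the exact Python preprocessing
     *  (framing + feature computation, e.g. MFCCs) that produced the 10x40 matrix. */
    public float[][] extractFeatures(float[] samples) {
        float[][] features = new float[NUM_FRAMES][NUM_FEATURES];
        // TODO: identical feature extraction to the training pipeline
        return features;
    }
}
```

This requires the RECORD_AUDIO permission in the manifest (and a runtime permission request on API 23+).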

Please see this link if it's helpful; its last part covers how to build an app that runs the model in the app: https://medium.com/@vvalouch/from-keras-to-android-with-tensorflow-lite-7581368aa23e
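For completeness, a hedged sketch of actually running inference, assuming `interpreter` was created as shown earlier, `features` is the 10x40 matrix, the model's input shape is [1, 10, 40, 1], and it outputs scores for the 10 UrbanSound8K classes (verify both shapes with `interpreter.getInputTensor(0).shape()` and `interpreter.getOutputTensor(0).shape()` in case your converted model differs):

```java
// Assumes `interpreter` and `features` already exist as described above.
float[][][][] input = new float[1][10][40][1];
for (int r = 0; r < 10; r++) {
    for (int c = 0; c < 40; c++) {
        input[0][r][c][0] = features[r][c];
    }
}

float[][] output = new float[1][10];   // assumption: 10 UrbanSound8K classes
interpreter.run(input, output);

// Pick the class with the highest score.
int best = 0;
for (int i = 1; i < output[0].length; i++) {
    if (output[0][i] > output[0][best]) best = i;
}
```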
