I am working on audio classification, using spectrograms and MFCC plots to train CNN image classifiers. Currently I have two separate ConvNets, one trained on each feature type, giving average accuracy (55-60%), and I have a separate weight file for each model. Now I want to combine the two models, i.e., extract both the spectrogram and the MFCC from each audio file, run them through my already-built models, and get higher accuracy. How can I do that?
One way to combine two already-trained models is to add a common fully-connected layer on top of both convolutional models and train the network.

The input goes into ConvModel-1 and ConvModel-2, which produce two output vectors. Merge these two vectors (by concatenation, averaging, etc.) and pass the merged vector to the fully-connected layer.
You can now train this network in 2 ways -

1. Freeze the two convolutional models (use them as fixed feature extractors) and train only the new fully-connected layer.
2. Fine-tune the whole network end-to-end, updating the convolutional weights together with the fully-connected layer (typically with a small learning rate so the pretrained weights are not destroyed).
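A minimal sketch of this idea in Keras is below. It is not your exact setup: the two small CNNs here are stand-ins for your pretrained spectrogram and MFCC models (in practice you would load them with `keras.models.load_model` and your own weight files), and the input shapes, layer sizes, and class count are placeholder assumptions.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def make_cnn(input_shape, name):
    """Stand-in for one of the pretrained ConvNets (spectrogram or MFCC)."""
    inp = keras.Input(shape=input_shape, name=name + "_in")
    x = layers.Conv2D(8, 3, activation="relu")(inp)
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dense(16, activation="relu")(x)  # feature vector output
    return keras.Model(inp, x, name=name)

# Placeholder shapes; replace with your real models/weights.
spec_model = make_cnn((64, 64, 1), "spec")
mfcc_model = make_cnn((40, 40, 1), "mfcc")

# Option 1 from above: freeze the pretrained branches,
# train only the new fully-connected head.
spec_model.trainable = False
mfcc_model.trainable = False

# Merge the two feature vectors and add the shared FC layer.
merged = layers.Concatenate()([spec_model.output, mfcc_model.output])
out = layers.Dense(10, activation="softmax")(merged)  # 10 = assumed n_classes

combined = keras.Model([spec_model.input, mfcc_model.input], out)
combined.compile(optimizer="adam",
                 loss="sparse_categorical_crossentropy",
                 metrics=["accuracy"])
```

Training then takes a pair of inputs per example, e.g. `combined.fit([spec_images, mfcc_images], labels, ...)`. For option 2 (fine-tuning), set `trainable = True` on the branches, recompile with a small learning rate, and continue training.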