
AttributeError: 'Functional' object has no attribute 'predict_proba'

While plotting ROC curves for multiple models, I am facing this particular error. Help is needed.

    from tensorflow.keras.models import load_model

    def dense():
        return load_model('DenseNet201.h5')
    def mobile():
        return load_model('MobileNet.h5')
    def res():
        return load_model('ResNet50V2.h5')
    def vgg():
        return load_model('VGG16.h5')

    models = [
        {
            'label': 'DenseNet201',
            'model': dense(),
        },
        {
            'label': 'MobileNet',
            'model': mobile(),
        },
        {
            'label': 'ResNet50V2',
            'model': res(),
        },
        {
            'label': 'VGG16',
            'model': vgg(),
        },
    ]
    from sklearn import metrics
    import matplotlib.pyplot as plt
    from tensorflow.keras.utils import to_categorical

    plt.figure()

    # The for loop below iterates through the models list
    for m in models:
        model = m['model']                  # select the model
        #model.fit(X_train, y_train)        # train the model
        y_pred = model.predict(X_test)      # predict the test data
        # Compute false positive rate and true positive rate
        #fpr, tpr, thresholds = metrics.roc_curve(y_test, model.y_pred_bin(X_test)[:,1])
        fpr, tpr, thresholds = metrics.roc_curve(y_test, model.predict_proba(X_test)[:,1])
        # Calculate area under the curve to display on the plot
        auc = metrics.roc_auc_score(y_test, model.predict(X_test))
        # Now, plot the computed values
        plt.plot(fpr, tpr, label='%s ROC (area = %0.2f)' % (m['label'], auc))

    # Custom settings for the plot
    plt.plot([0, 1], [0, 1], 'r--')
    plt.xlim([0.0, 1.0])
    plt.ylim([0.0, 1.05])
    plt.xlabel('1-Specificity (False Positive Rate)')
    plt.ylabel('Sensitivity (True Positive Rate)')
    plt.title('Receiver Operating Characteristic')
    plt.legend(loc="lower right")
    plt.show()   # display the plot

I loaded my pretrained models in separate functions and returned them using the code above. I made a list that is iterated over; it calls the functions which load those models, so that an ROC curve is plotted for each model.

Full Traceback

> AttributeError                            Traceback (most recent call
> last) <ipython-input-43-f353a6208636> in <module>()
>      11 # Compute False postive rate, and True positive rate
>      12     #fpr, tpr, thresholds = metrics.roc_curve(y_test, model.y_pred_bin(X_test)[:,1])
> ---> 13     pred_prob = model.predict_proba(X_test)
>      14     fpr, tpr, thresholds = metrics.roc_curve(y_test, pred_prob[:,1])
>      15 # Calculate Area under the curve to display on the plot
> 
> AttributeError: 'Functional' object has no attribute 'predict_proba'

You can use this instead:

    model.predict_on_batch(X_test)
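
For example, in the ROC loop from the question, the failing line could be swapped like this (a minimal sketch; it assumes a two-class softmax output and that y_test holds plain 0/1 labels — if y_test is one-hot encoded, pass y_test[:, 1] instead):

    # Sketch: inside the for loop over models, replace the predict_proba call
    y_score = model.predict_on_batch(X_test)[:, 1]          # probability of class 1
    fpr, tpr, thresholds = metrics.roc_curve(y_test, y_score)
    auc = metrics.roc_auc_score(y_test, y_score)
    plt.plot(fpr, tpr, label='%s ROC (area = %0.2f)' % (m['label'], auc))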

Mind you, roc_curve or f1_score takes single-valued outputs only, like [[1],[0],...,[1]].

In my case I had converted the labels to categorical, like [[0,1], [1,0], ..., [0,1]], and used a softmax in my output layer,

so model.predict_on_batch(X_test) also gave output like [[0.3,0.7], [0.9,0.1], ..., [0.2,0.8]].

So I had to convert it to a single-valued output and then pass it through sklearn's roc_curve or f1_score, using the following function:

    import numpy as np

    def y_(y):
        # Convert two-column softmax probabilities into hard 0/1 labels
        r = []
        for i in y:
            if i[0] > 0.5:
                r.append([0])
            else:
                r.append([1])
        return np.array(r)
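
A hedged usage sketch of the helper above (the names y_true and y_pred are illustrative; it assumes y_test was one-hot encoded with to_categorical, so np.argmax recovers the original 0/1 labels):

    import numpy as np
    from sklearn.metrics import f1_score, roc_curve

    probs = model.predict_on_batch(X_test)       # e.g. [[0.3, 0.7], [0.9, 0.1], ...]
    y_true = np.argmax(y_test, axis=1)           # back to single-valued 0/1 labels
    y_pred = y_(probs).ravel()                   # hard labels from the helper above

    print(f1_score(y_true, y_pred))
    # For the ROC curve itself, the continuous class-1 probability can serve as the score:
    fpr, tpr, _ = roc_curve(y_true, probs[:, 1])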
