When you increase the sensitivity, you can do this in small steps up to a true positive rate of about 65%; after that, the curve jumps straight to 100%. This can happen, for instance, when 35% of your positive cases all share the same predictor value (e.g. zero): as soon as the threshold crosses that value, all of those cases flip to positive at once.
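A minimal simulation of this tie effect (the score distributions and the 65/35 split are illustrative assumptions, not taken from your data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scores: 65 positives get distinguishable high scores,
# 35 positives are all tied at exactly 0; negatives sit in between.
pos = np.concatenate([rng.uniform(0.5, 1.0, 65), np.zeros(35)])
neg = rng.uniform(0.0, 0.4, 100)
scores = np.concatenate([pos, neg])

# Sweep the threshold down through the distinct score values and record
# the true positive rate at each step.
thresholds = np.unique(scores)[::-1]
tpr = [(pos >= t).mean() for t in thresholds]
```

As the threshold decreases, the TPR climbs in small steps to 0.65 and then, at the tied value 0, jumps directly to 1.0: there is no threshold that separates the 35 tied positives from each other, so no computed ROC point lies between 65% and 100% sensitivity.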
Your model is only evaluated at discrete points; the curve you see is drawn by connecting those points with straight lines.
If you want a true positive rate between 65% and 100%, you can obtain it by randomly mixing the two models with sensitivities of 65% and 100%. (See: Combining classifiers by flipping a coin)
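The coin-flip mixing can be sketched as follows (a minimal simulation; the two sensitivities, the mixing probability `p`, and the independence of the coin flips are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

sens_a, sens_b = 0.65, 1.00  # sensitivities of the two end-point models
p = 0.5                      # probability of using model B on a given case
n = 200_000                  # number of simulated positive cases

# For each positive case, flip a coin: with probability p, classify it
# with model B, otherwise with model A.
use_b = rng.random(n) < p
detected = np.where(use_b,
                    rng.random(n) < sens_b,   # model B detects with prob sens_b
                    rng.random(n) < sens_a)   # model A detects with prob sens_a

# Sensitivity of the mixture: (1 - p) * sens_a + p * sens_b = 0.825
mixed_sens = detected.mean()
```

The same interpolation happens to the false positive rate, which is why the mixed classifier lands on the straight line segment between the two ROC points; varying `p` between 0 and 1 traces out the whole segment.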
Theoretically, this can also occur if the positive and negative populations have the same distribution density (differing only by a constant factor) in the region of predictor values that corresponds to high sensitivity.
It may also be that you did not compute the entire ROC curve, and the front end that plots it completes the curve automatically by connecting the last computed point to (1, 1).
"Thank you so much for the explanation! When you say 'This can occur for instance when 35% of your positive cases have all the same value (e.g. zero)', don't you mean negative cases? Because if the value is 0, doesn't that mean they are negative cases?"