Sensitivity, specificity, ROC, AUC …

1 min read

You can’t believe how much jargon there is in binary classification. Just remember the following diagram (from Wikipedia).

accuracy = (TP + TN) / (P + N), i.e. correctly classified divided by the total
false discovery rate (FDR) = FP / (TP + FP), i.e. incorrectly classified as positive, divided by all cases classified as positive
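To make the formulas concrete, here is a minimal Python sketch; the counts are made up purely for illustration:

```python
# Hypothetical confusion-matrix counts (not from real data)
TP, FP, TN, FN = 40, 10, 45, 5

P = TP + FN   # all actually positive cases
N = TN + FP   # all actually negative cases

accuracy    = (TP + TN) / (P + N)   # correctly classified / total
sensitivity = TP / P                # true positive rate
specificity = TN / N                # true negative rate
fdr         = FP / (TP + FP)        # false discovery rate

print(accuracy, sensitivity, specificity, fdr)
```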

ROC (Receiver operating characteristic) is simply the plot of sensitivity against 1-specificity

AUC is the area under the ROC curve

The ROC curve stays close to the diagonal line if the two categories overlap heavily and are difficult to classify; it bows toward the upper-left corner if the two categories are well separated. Here I plot ROC curves for three simulated datasets with different amounts of overlap between the two categories to be classified.
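Below is a sketch of that simulation, assuming two Gaussian score distributions whose separation controls the overlap (the means, sample sizes, and seed are my assumptions, not the original values):

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc

rng = np.random.default_rng(0)
n = 1000
for sep in [0.5, 1.5, 3.0]:  # small / medium / large class separation
    scores = np.concatenate([rng.normal(0, 1, n),      # negative class
                             rng.normal(sep, 1, n)])   # positive class
    labels = np.concatenate([np.zeros(n), np.ones(n)])
    fpr, tpr, _ = roc_curve(labels, scores)
    plt.plot(fpr, tpr, label=f"separation={sep}, AUC={auc(fpr, tpr):.2f}")

plt.plot([0, 1], [0, 1], "k--")  # diagonal = chance level
plt.xlabel("1 - specificity (FPR)")
plt.ylabel("sensitivity (TPR)")
plt.legend()
plt.show()
```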

What’s the meaning of AUC? Wikipedia says:

The AUC is equal to the probability that a classifier will rank a randomly chosen positive instance higher than a randomly chosen negative one.

This is hard to understand.

A single classifier won’t produce a curve; it produces only a single point (i.e. one value of sensitivity and one of specificity). For example, suppose we have 100 people and we want to infer their gender based on their heights and weights. If our classifier is “male if height is larger than 1.7m”, then this classifier produces only a point.

A class of classifiers will produce a curve. Assume we have a class of classifiers called “classify male/female based on height”. Then by varying the threshold we trace out a curve (the ROC), as in the sketch below.
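Here is a minimal sketch of the height example: a fixed threshold gives one point, and sweeping the threshold traces the whole curve. The heights are simulated, and the means and SDs are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
height_f = rng.normal(1.62, 0.07, 50)  # 50 simulated female heights (m)
height_m = rng.normal(1.75, 0.07, 50)  # 50 simulated male heights (m)
heights = np.concatenate([height_f, height_m])
is_male = np.concatenate([np.zeros(50), np.ones(50)])

def roc_point(threshold):
    pred = heights > threshold            # classify "male" above threshold
    tpr = pred[is_male == 1].mean()       # sensitivity
    fpr = pred[is_male == 0].mean()       # 1 - specificity
    return fpr, tpr

print(roc_point(1.7))  # the single classifier "male if height > 1.7m"

# Sweeping thresholds yields the full ROC curve for this class:
curve = [roc_point(t) for t in np.linspace(1.4, 2.0, 61)]
```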

Then there are many classes of classifiers. For example, we can have a class called “classify by weight”, or “classify by weight and height linearly”, or “classify by weight and height nonlinearly”, etc. It’s likely that the ROC curve produced by the class “classify by weight and height linearly” lies above the one produced by “classify by height”, and thus yields a larger AUC.

So AUC is a property of a class of classifiers, not of a single classifier. But what does it exactly mean? …
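One way to see the Wikipedia interpretation numerically: the AUC equals the fraction of (positive, negative) pairs in which the positive case receives the higher score. A quick simulated check (the scores and seed are arbitrary):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
pos = rng.normal(1.0, 1.0, 500)   # scores of positive cases
neg = rng.normal(0.0, 1.0, 500)   # scores of negative cases

labels = np.concatenate([np.ones(500), np.zeros(500)])
scores = np.concatenate([pos, neg])

auc = roc_auc_score(labels, scores)

# Probability that a random positive outranks a random negative
# (ties counted as half, matching the usual definition):
pairwise = (pos[:, None] > neg[None, :]).mean() \
         + 0.5 * (pos[:, None] == neg[None, :]).mean()

print(auc, pairwise)  # the two numbers agree
```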




5 Replies to “Sensitivity, specificity, ROC, AUC …”

  1. AUC is a measure of the degree of discrimination (for a binary variable) using a predictor or set of predictors.

    It ranges from 0.5 to 1.0. But this is just one of the many concordance measures in statistics.

    If you have done data analyses before and performed a hypothesis test, say it was significant (i.e. you reject the null), does that mean that the null is not true?

  2. Hi Dr.,

    I’m a master’s student.
    After I train an ANN I want to compute accuracy, sensitivity, precision, and specificity, but with the confusion matrix, sensitivity and specificity give the same result.
    Can you help me find good code to compute the performance of a classifier?
    Thanks a lot.
    hoda zamani

  3. Dear Sir,

    If I have two binary images, one manually segmented and the other a test result, how do I calculate those parameters in such a case?

    Thanks
