Measure

recall

Recall is defined as the number of true positive (TP) predictions divided by the sum of true positives and false negatives (TP + FN): $$\text{Recall}=\frac{TP}{TP+FN}$$ It is also referred to as the True Positive Rate (TPR) or Sensitivity. See: http://en.wikipedia.org/wiki/Precision_and_recall Recall is defined only for a specific class value and should therefore be labeled with the class value for which it was computed. Use mean_weighted_recall for the weighted average over all class values.
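For illustration, the following is a minimal, self-contained Java sketch (not WEKA code; class name, matrix layout, and values are assumptions) that computes recall for one class from a confusion matrix whose rows are actual classes and whose columns are predicted classes:

import java.util.Arrays;

public class RecallExample {

    // Recall = TP / (TP + FN) for the given class.
    // Assumes rows = actual class, columns = predicted class.
    static double recall(double[][] confusionMatrix, int classIndex) {
        double tp = confusionMatrix[classIndex][classIndex];
        // TP + FN = sum of the row for this class (all actual positives).
        double actualPositives = Arrays.stream(confusionMatrix[classIndex]).sum();
        return actualPositives == 0 ? 0 : tp / actualPositives;
    }

    public static void main(String[] args) {
        // Hypothetical 2-class confusion matrix.
        double[][] cm = {
            {40, 10},  // class 0: 40 TP, 10 FN
            { 5, 45}   // class 1
        };
        System.out.println("Recall for class 0: " + recall(cm, 0)); // 40 / 50 = 0.8
    }
}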

Source Code:
WEKA's Evaluation.truePositiveRate(int classIndex):

  /**
   * Calculate the true positive rate with respect to a particular class.
   * This is defined as
   *
   *   correctly classified positives
   *   ------------------------------
   *          total positives
   *
   * @param classIndex the index of the class to consider as "positive"
   * @return the true positive rate
   */
  public double truePositiveRate(int classIndex) {
    double correct = 0, total = 0;
    for (int j = 0; j < m_NumClasses; j++) {
      if (j == classIndex) {
        correct += m_ConfusionMatrix[classIndex][j];
      }
      total += m_ConfusionMatrix[classIndex][j];
    }
    if (total == 0) {
      return 0;
    }
    return correct / total;
  }
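As a usage sketch (assuming a standard WEKA setup; the dataset path and the choice of J48 as classifier are placeholders), recall per class and its weighted average can be obtained from an Evaluation object after cross-validation:

import java.util.Random;
import weka.classifiers.Evaluation;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class RecallFromWeka {
    public static void main(String[] args) throws Exception {
        // "dataset.arff" is a placeholder path; any dataset with a nominal class works.
        Instances data = DataSource.read("dataset.arff");
        data.setClassIndex(data.numAttributes() - 1);

        // 10-fold cross-validation of an example classifier (J48).
        Evaluation eval = new Evaluation(data);
        eval.crossValidateModel(new J48(), data, 10, new Random(1));

        // Recall (true positive rate) for class index 0, and the weighted average.
        System.out.println("Recall (class 0): " + eval.truePositiveRate(0));
        System.out.println("Weighted recall:  " + eval.weightedTruePositiveRate());
    }
}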

Properties

Minimum value: 0
Maximum value: 1
Unit:
Optimization: Higher is better