262 results
Description to be added
estimation procedure
Subgroup discovery measure.
evaluation measure
Subgroup discovery measure.
evaluation measure
Subgroup discovery measure.
evaluation measure
The number of observations in the current subgroup.
evaluation measure
Subgroup discovery measure.
evaluation measure
Subgroup discovery measure.
evaluation measure
The number of positives in the subgroup.
evaluation measure
The probability of a subgroup.
evaluation measure
The quality of the found subgroup.
evaluation measure
Cross-validation is a technique to evaluate predictive models by partitioning the original sample into a training set to train the model, and a test set to evaluate it. In k-fold cross-validation,…
estimation procedure
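To illustrate the k-fold procedure described above, a minimal Python sketch (scikit-learn is assumed; the iris data and decision tree are placeholder choices, not part of the original entry):

    # Minimal k-fold cross-validation sketch: split the data into k folds,
    # train on k-1 folds, test on the remaining fold, and average the results.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import KFold
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)
    kf = KFold(n_splits=10, shuffle=True, random_state=0)

    scores = []
    for train_idx, test_idx in kf.split(X):
        model = DecisionTreeClassifier().fit(X[train_idx], y[train_idx])
        scores.append(model.score(X[test_idx], y[test_idx]))

    print(sum(scores) / len(scores))  # mean accuracy over the 10 folds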
A custom holdout partitions a set of observations into a training set and a test set in a predefined way. This is typically done in order to compare the performance of different predictive algorithms…
estimation procedure
Cross-validation is a technique to evaluate predictive models by partitioning the original sample into a training set to train the model, and a test set to evaluate it. In k-fold cross-validation,…
estimation procedure
Cross-validation is a technique to evaluate predictive models by partitioning the original sample into a training set to train the model, and a test set to evaluate it. In k-fold cross-validation,…
estimation procedure
Cross-validation is a technique to evaluate predictive models by partitioning the original sample into a training set to train the model, and a test set to evaluate it. In k-fold cross-validation,…
estimation procedure
Cross-validation is a technique to evaluate predictive models by partitioning the original sample into a training set to train the model, and a test set to evaluate it. In k-fold cross-validation,…
estimation procedure
Leave-one-out is a special case of cross-validation where the number of folds equals the number of instances. Thus, models are always evaluated on one instance and trained on all others. Leave-one-out…
estimation procedure
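A sketch of the leave-one-out case, where the number of folds equals the number of instances (same placeholder assumptions as the cross-validation sketch above):

    # Leave-one-out: each fold tests on a single instance and trains on the rest.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import LeaveOneOut
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)
    correct = 0
    for train_idx, test_idx in LeaveOneOut().split(X):
        model = DecisionTreeClassifier().fit(X[train_idx], y[train_idx])
        correct += model.score(X[test_idx], y[test_idx])
    print(correct / len(X))  # accuracy over the n single-instance test sets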
Holdout or random subsampling is a technique to evaluate predictive models by partitioning the original sample into a training set to train the model, and a test set to evaluate it. In a k% holdout,…
estimation procedure
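A minimal sketch of a k% holdout split, again assuming scikit-learn and a placeholder dataset:

    # Random holdout: a single randomised split, here holding out 33% for testing.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.33, random_state=0)
    model = DecisionTreeClassifier().fit(X_train, y_train)
    print(model.score(X_test, y_test))  # accuracy on the held-out 33%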
Holdout or random subsampling is a technique to evaluate predictive models by partitioning the original sample into a training set to train the model, and a test set to evaluate it. In a k% holdout,…
estimation procedure
Cross-validation is a technique to evaluate predictive models by partitioning the original sample into a training set to train the model, and a test set to evaluate it. In k-fold cross-validation,…
estimation procedure
Cross-validation is a technique to evaluate predictive models by partitioning the original sample into a training set to train the model, and a test set to evaluate it. In k-fold cross-validation,…
estimation procedure
Cross-validation is a technique to evaluate predictive models by partitioning the original sample into a training set to train the model, and a test set to evaluate it. In k-fold cross-validation,…
estimation procedure
Leave-one-out is a special case of cross-validation where the number of folds equals the number of instances. Thus, models are always evaluated on one instance and trained on all others. Leave-one-out…
estimation procedure
Holdout or random subsampling is a technique to evaluate predictive models by partitioning the original sample into a training set to train the model, and a test set to evaluate it. In a k% holdout,…
estimation procedure
Holdout or random subsampling is a technique to evaluate predictive models by partitioning the original sample into a training set to train the model, and a test set to evaluate it. In a k% holdout,…
estimation procedure
Description to be added
estimation procedure
Description to be added
estimation procedure
Description to be added
estimation procedure
A custom holdout partitions a set of observations into a training set and a test set in a predefined way. This is typically done in order to compare the performance of different predictive algorithms…
estimation procedure
Cross-validation is a technique to evaluate predictive models by partitioning the original sample into a training set to train the model, and a test set to evaluate it. In k-fold cross-validation,…
estimation procedure
Cross-validation is a technique to evaluate predictive models by partitioning the original sample into a training set to train the model, and a test set to evaluate it. In k-fold cross-validation,…
estimation procedure
Cross-validation is a technique to evaluate predictive models by partitioning the original sample into a training set to train the model, and a test set to evaluate it. In k-fold cross-validation,…
estimation procedure
Leave-one-out is a special case of cross-validation where the number of folds equals the number of instances. Thus, models are always evaluated on one instance and trained on all others. Leave-one-out…
estimation procedure
The area under the ROC curve (AUROC), calculated using the Mann-Whitney U-test. The curve is constructed by shifting the threshold for a positive prediction from 0 to 1, yielding a series of true…
evaluation measure
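A sketch of the rank-based (Mann-Whitney) view of AUROC for a binary problem, with placeholder labels and scores; the AUC equals U divided by the number of positive-negative pairs:

    # AUROC via the Mann-Whitney U relation: AUC = U / (n_pos * n_neg).
    def auroc(y_true, y_score):
        # assign 1-based ranks to the scores, averaging ranks over ties
        order = sorted(range(len(y_score)), key=lambda i: y_score[i])
        ranks = [0.0] * len(y_score)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and y_score[order[j + 1]] == y_score[order[i]]:
                j += 1
            for k in range(i, j + 1):
                ranks[order[k]] = (i + j) / 2 + 1
            i = j + 1
        pos_ranks = [r for r, t in zip(ranks, y_true) if t == 1]
        n_pos, n_neg = len(pos_ranks), len(y_true) - len(pos_ranks)
        u = sum(pos_ranks) - n_pos * (n_pos + 1) / 2
        return u / (n_pos * n_neg)

    print(auroc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75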
The time in seconds to build a single model on all data.
evaluation measure
The memory, in bytes, needed to build a single model on all data.
evaluation measure
Used for survival analysis.
evaluation measure
Entropy, in bits, of the class distribution generated by the model's predictions. Calculated by taking the sum of -log2(predictedProb) over all instances, where predictedProb is the probability…
evaluation measure
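A worked sketch of the entropy sum described above, with placeholder probabilities (each value stands for the probability the model assigned to the instance's actual class):

    # class_complexity: sum of -log2(predictedProb) over all instances, in bits.
    import math

    predicted_prob_of_actual = [0.9, 0.6, 0.8]  # placeholder model outputs
    eps = 1e-12  # guard against log2(0)
    class_complexity = sum(-math.log2(max(p, eps)) for p in predicted_prob_of_actual)
    print(class_complexity)  # divide by the number of instances for the mean variant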
Entropy reduction, in bits, between the class distribution generated by the model's predictions, and the prior class distribution. Calculated by taking the difference of the prior_class_complexity and…
evaluation measure
The confusion matrix, or contingency table, is a table that summarizes the number of instances that were predicted to belong to a certain class, versus their actual class. It is an NxN matrix where N…
evaluation measure
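A small sketch of how such an NxN table can be assembled from placeholder labels (rows are actual classes, columns are predicted classes):

    # Build a confusion matrix by counting (actual, predicted) pairs.
    from collections import Counter

    actual    = ['cat', 'dog', 'dog', 'cat', 'bird', 'dog']
    predicted = ['cat', 'dog', 'cat', 'cat', 'bird', 'dog']

    labels = sorted(set(actual) | set(predicted))
    counts = Counter(zip(actual, predicted))
    matrix = [[counts[(a, p)] for p in labels] for a in labels]
    for label, row in zip(labels, matrix):
        print(label, row)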
The sample Pearson correlation coefficient, or 'r': r = \frac{\sum_{i=1}^{n}(X_i - \bar{X})(Y_i - \bar{Y})}{\sqrt{\sum_{i=1}^{n}(X_i - \bar{X})^2} \sqrt{\sum_{i=1}^{n}(Y_i - \bar{Y})^2}}…
evaluation measure
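The same formula written out as a small Python function, with placeholder inputs:

    # Sample Pearson correlation coefficient r.
    import math

    def pearson_r(x, y):
        n = len(x)
        mean_x, mean_y = sum(x) / n, sum(y) / n
        num = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
        den_x = math.sqrt(sum((xi - mean_x) ** 2 for xi in x))
        den_y = math.sqrt(sum((yi - mean_y) ** 2 for yi in y))
        return num / (den_x * den_y)

    print(pearson_r([1, 2, 3, 4], [1.1, 1.9, 3.2, 4.1]))  # close to 1 for a near-linear relation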
The F-Measure is the harmonic mean of precision and recall, also known as the traditional F-measure, balanced F-score, or F1-score: Formula: 2*Precision*Recall/(Precision+Recall) See:…
evaluation measure
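The formula above, worked out with placeholder precision and recall values:

    # F1: harmonic mean of precision and recall.
    def f1(precision, recall):
        return 2 * precision * recall / (precision + recall)

    print(f1(precision=0.8, recall=0.5))  # ~0.615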
Cohen's kappa coefficient is a statistical measure of agreement for qualitative (categorical) items: it measures the agreement of prediction with the true class – 1.0 signifies complete agreement.…
evaluation measure
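A sketch of the usual computation behind Cohen's kappa, (p_o - p_e) / (1 - p_e), where p_o is the observed agreement and p_e the agreement expected by chance; the labels below are placeholders:

    # Cohen's kappa from observed vs. chance agreement.
    from collections import Counter

    actual    = ['a', 'a', 'b', 'b', 'a', 'b']
    predicted = ['a', 'b', 'b', 'b', 'a', 'a']

    n = len(actual)
    p_o = sum(a == p for a, p in zip(actual, predicted)) / n
    counts_a, counts_p = Counter(actual), Counter(predicted)
    p_e = sum(counts_a[c] * counts_p[c] for c in counts_a) / (n * n)
    print((p_o - p_e) / (1 - p_e))  # 1.0 would mean complete agreement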
The Kononenko and Bratko Information score, divided by the prior entropy of the class distribution. See: Kononenko, I., Bratko, I.: Information-based evaluation criterion for classifier's performance.…
evaluation measure
Bias component (squared) of the bias-variance decomposition as defined by Kohavi and Wolpert in: R. Kohavi & D. Wolpert (1996), Bias plus variance decomposition for zero-one loss functions, in Proc.…
evaluation measure
Error rate measured in the bias-variance decomposition as defined by Kohavi and Wolpert in: R. Kohavi & D. Wolpert (1996), Bias plus variance decomposition for zero-one loss functions, in Proc. of the…
evaluation measure
Intrinsic error component (squared) of the bias-variance decomposition as defined by Kohavi and Wolpert in: R. Kohavi and D. Wolpert (1996), Bias plus variance decomposition for zero-one loss…
evaluation measure
Variance component of the bias-variance decomposition as defined by Kohavi and Wolpert in: R. Kohavi and D. Wolpert (1996), Bias plus variance decomposition for zero-one loss functions, in Proc. of…
evaluation measure
Kononenko and Bratko Information score. This measures predictive accuracy but eliminates the influence of prior probabilities. See: Kononenko, I., Bratko, I.: Information-based evaluation criterion…
evaluation measure
The Matthews correlation coefficient takes into account true and false positives and negatives and is generally regarded as a balanced measure which can be used even if the classes are of very…
evaluation measure
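A sketch of the standard binary-case formula, with placeholder confusion-matrix counts:

    # Matthews correlation coefficient from TP, TN, FP, FN.
    import math

    tp, tn, fp, fn = 90, 5, 10, 5
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    print(mcc)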
The mean absolute error (MAE) measures how close the model's predictions are to the actual target values. It is the sum of the absolute value of the difference of each instance prediction and the…
evaluation measure
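A worked example with placeholder targets and predictions:

    # Mean absolute error: average of |prediction - actual|.
    actual    = [3.0, -0.5, 2.0, 7.0]
    predicted = [2.5,  0.0, 2.0, 8.0]
    mae = sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)
    print(mae)  # 0.5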
The entropy of the class distribution generated by the model (see class_complexity), divided by the number of instances in the input data.
evaluation measure
The entropy gain of the class distribution by the model over the prior distribution (see class_complexity_gain), divided by the number of instances in the input data.
evaluation measure
Unweighted(!) macro-average F-Measure. In macro-averaging, F-measure is computed locally over each category first and then the average over all categories is taken.
evaluation measure
Kononenko and Bratko Information score, see kononenko_bratko_information_score, divided by the number of instances in the input data. See: Kononenko, I., Bratko, I.: Information-based evaluation…
evaluation measure
Unweighted(!) macro-average Precision. In macro-averaging, Precision is computed locally over each category first and then the average over all categories is taken.
evaluation measure
The mean prior absolute error (MPAE) is the mean absolute error (see mean_absolute_error) of the prior (e.g., default class prediction). See: http://en.wikipedia.org/wiki/Mean_absolute_error
evaluation measure
The entropy of the class distribution of the prior (see prior_class_complexity), divided by the number of instances in the input data.
evaluation measure
Unweighted(!) macro-average Recall. In macro-averaging, Recall is computed locally over each category first and then the average over all categories is taken.
evaluation measure
The macro weighted (by class size) average area_under_ROC_curve (AUROC). In macro-averaging, AUROC is computed locally over each category first and then the average over all categories is taken,…
evaluation measure
The macro weighted (by class size) average F-Measure. In macro-averaging, F-measure is computed locally over each category first and then the average over all categories is taken, weighted by the…
evaluation measure
The macro weighted (by class size) average Precision. In macro-averaging, Precision is computed locally over each category first and then the average over all categories is taken, weighted by the…
evaluation measure
The macro weighted (by class size) average Recall. In macro-averaging, Recall is computed locally over each category first and then the average over all categories is taken, weighted by the number of…
evaluation measure
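A small sketch contrasting the unweighted and class-size-weighted macro-averages used by the entries above, with placeholder per-class scores:

    # Macro-averaging: per-class scores averaged, unweighted or weighted by class size.
    per_class_score = {'a': 0.9, 'b': 0.6, 'c': 0.3}
    class_sizes     = {'a': 50,  'b': 30,  'c': 20}

    unweighted = sum(per_class_score.values()) / len(per_class_score)
    total = sum(class_sizes.values())
    weighted = sum(per_class_score[c] * class_sizes[c] for c in class_sizes) / total
    print(unweighted, weighted)  # 0.6 vs 0.69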
The number of instances used for this evaluation.
evaluation measure
Default information about OS, JVM, installations, etc.
evaluation measure
Precision is defined as the number of true positive (TP) predictions, divided by the sum of the number of true positives and false positives (TP+FP): \text{Precision}=\frac{tp}{tp+fp} \, …
evaluation measure
The Predictive Accuracy is the percentage of instances that are classified correctly. It is 1 - ErrorRate.
evaluation measure
Entropy, in bits, of the prior class distribution. Calculated by taking the sum of -log2(priorProb) over all instances, where priorProb is the prior probability of the actual class for that instance.…
evaluation measure
Entropy, in bits, of the prior class distribution. Calculated by taking the sum of -log2(priorProb) over all instances, where priorProb is the prior probability of the actual class for that instance.…
evaluation measure
Every GB of RAM deployed for 1 hour equals one RAM-Hour.
evaluation measure
Recall is defined as the number of true positive (TP) predictions, divided by the sum of the number of true positives and false negatives (TP+FN): \text{Recall}=\frac{tp}{tp+fn} \, It is…
evaluation measure
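The recall formula above and the precision formula given earlier in this list, worked out with placeholder counts:

    # Precision = TP / (TP + FP); Recall = TP / (TP + FN).
    tp, fp, fn = 40, 10, 20
    precision = tp / (tp + fp)  # 0.8
    recall    = tp / (tp + fn)  # ~0.667
    print(precision, recall)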
The Relative Absolute Error (RAE) is the mean absolute error (MAE) divided by the mean prior absolute error (MPAE).
evaluation measure
The Root Mean Prior Squared Error (RMPSE) is the Root Mean Squared Error (RMSE) of the prior (e.g., the default class prediction).
evaluation measure
The Root Mean Squared Error (RMSE) measures how close the model's predictions are to the actual target values. It is the square root of the Mean Squared Error (MSE), the sum of the squared differences…
evaluation measure
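A worked example with placeholder targets and predictions:

    # Root mean squared error: square root of the mean squared difference.
    import math

    actual    = [3.0, -0.5, 2.0, 7.0]
    predicted = [2.5,  0.0, 2.0, 8.0]
    mse = sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual)
    print(math.sqrt(mse))  # ~0.612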
The Root Relative Squared Error (RRSE) is the Root Mean Squared Error (RMSE) divided by the Root Mean Prior Squared Error (RMPSE). See root_mean_squared_error and root_mean_prior_squared_error.
evaluation measure
Runtime in seconds of the entire run. In the case of cross-validation runs, this will include all iterations.
evaluation measure
Amount of memory, in bytes, used during the entire run.
evaluation measure
Amount of virtual memory, in bytes, used during the entire run.
evaluation measure
A benchmark tool which measures (single core) CPU performance on the JVM.
evaluation measure
Number of instances that were not classified by the model.
evaluation measure
The time in milliseconds to build and test a single model on all data.
evaluation measure
The time in milliseconds to test a single model on all data.
evaluation measure
The time in milliseconds to build a single model on all data.
evaluation measure
Bias component (squared) of the bias-variance decomposition as defined by Webb in: Geoffrey I. Webb (2000), MultiBoosting: A Technique for Combining Boosting and Wagging, Machine Learning, 40(2),…
evaluation measure
Intrinsic error component (squared) of the bias-variance decomposition as defined by Webb in: Geoffrey I. Webb (2000), MultiBoosting: A Technique for Combining Boosting and Wagging, Machine Learning,…
evaluation measure
Variance component of the bias-variance decomposition as defined by Webb in: Geoffrey I. Webb (2000), MultiBoosting: A Technique for Combining Boosting and Wagging, Machine Learning, 40(2), pages…
evaluation measure
DataQuality extracted from Fantail Library
data quality
DataQuality extracted from Fantail Library
data quality
DataQuality extracted from Fantail Library
data quality
DataQuality extracted from Fantail Library
data quality
DataQuality extracted from Fantail Library
data quality
DataQuality extracted from Fantail Library
data quality
DataQuality extracted from Fantail Library
data quality
DataQuality extracted from Fantail Library
data quality
DataQuality extracted from Fantail Library
data quality
The number of classes (distinct nominal values) in the target feature. Generated by the Fantail library.
data quality
Entropy of the class attribute, generated by the Fantail library. It determines the amount of information needed to specify the class of an instance, or how 'informative' the attributes need to be. A…
data quality
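A sketch of the class-entropy computation, with a placeholder label column:

    # Entropy (in bits) of the class attribute: 1 bit for a 50/50 split, 0 for a single class.
    import math
    from collections import Counter

    classes = ['pos', 'neg', 'pos', 'neg', 'pos', 'pos']
    n = len(classes)
    entropy = -sum((c / n) * math.log2(c / n) for c in Counter(classes).values())
    print(entropy)  # ~0.918 bits for this 4/2 split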
The AUC performance of a decision stump (one-level decision tree) trained on the data. Landmarking meta-feature generated by the Fantail library.
data quality
The error rate of a decision stump trained on the data. Landmarking meta-feature generated by the Fantail library.
data quality
Kappa statistic performance of a decision stump trained on the data. Landmarking meta-feature generated by the Fantail library.
data quality
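A sketch of how such a landmarking feature can be obtained, assuming scikit-learn and a placeholder dataset (the Fantail library itself is not used here):

    # Landmarker: error rate of a one-level decision tree ("decision stump").
    from sklearn.datasets import load_iris
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)
    stump = DecisionTreeClassifier(max_depth=1)          # one split only
    accuracy = cross_val_score(stump, X, y, cv=10).mean()
    print(1.0 - accuracy)                                # error-rate landmarker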