A custom holdout partitions a set of observations into a training set and a test set in a predefined way. This is typically done in order to compare the performance of different predictive algorithms…
estimation procedure
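A minimal sketch of such a predefined split (the index lists below are hypothetical; in practice the split is fixed and published with the task, so every algorithm sees exactly the same rows):

```python
# Predefined (custom) holdout sketch: the split is given as fixed index
# lists, so every algorithm is trained and tested on identical rows.
import numpy as np

X = np.arange(20, dtype=float).reshape(10, 2)   # toy data: 10 observations
y = np.array([0, 1, 0, 1, 0, 1, 0, 1, 0, 1])

train_idx = [0, 1, 2, 3, 4, 5, 6]   # hypothetical predefined training rows
test_idx = [7, 8, 9]                # hypothetical predefined test rows

X_train, y_train = X[train_idx], y[train_idx]
X_test, y_test = X[test_idx], y[test_idx]
```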
Cross-validation is a technique to evaluate predictive models by partitioning the original sample into a training set to train the model, and a test set to evaluate it. In k-fold cross-validation,…
estimation procedure
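For illustration, a k-fold run with scikit-learn (assumed here purely for demonstration; it is not how these procedures are executed server-side):

```python
# k-fold cross-validation sketch: every instance is used for testing
# exactly once, and for training in the other k-1 folds.
from sklearn.datasets import load_iris
from sklearn.model_selection import KFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
cv = KFold(n_splits=10, shuffle=True, random_state=0)   # k = 10 folds
scores = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=cv)
print(scores.mean())   # average accuracy over the 10 folds
```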
Holdout or random subsampling is a technique to evaluate predictive models by partitioning the original sample into a training set to train the model, and a test set to evaluate it. In a k% holdout,…
estimation procedure
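A holdout sketch, again using scikit-learn for illustration (the 33% figure is just an example value of k):

```python
# 33% holdout sketch: one random split, train on 67%, test on the rest.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, random_state=0)
model = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
print(model.score(X_te, y_te))   # accuracy on the held-out 33%
```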
Leave-one-out is a special case of cross-validation where the number of folds equals the number of instances. Thus, models are always evaluated on one instance and trained on all others. Leave-one-out…
estimation procedure
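A leave-one-out sketch (scikit-learn again assumed for illustration):

```python
# Leave-one-out sketch: n-fold cross-validation with n = number of instances,
# so each model is evaluated on a single held-out instance.
from sklearn.datasets import load_iris
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
scores = cross_val_score(KNeighborsClassifier(), X, y, cv=LeaveOneOut())
print(scores.mean())   # each fold scores 0 or 1; the mean is the accuracy
```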
The area under the ROC curve (AUROC), calculated using the Mann-Whitney U-test. The curve is constructed by shifting the threshold for a positive prediction from 0 to 1, yielding a series of true…
evaluation measure
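The Mann-Whitney route to the AUROC can be checked directly: U divided by the number of positive-negative pairs is the area under the curve. A sketch with SciPy (the scores below are invented):

```python
# AUROC via the Mann-Whitney U statistic: AUROC = U / (n_pos * n_neg).
import numpy as np
from scipy.stats import mannwhitneyu

y_true = np.array([0, 0, 1, 1, 1, 0])
scores = np.array([0.1, 0.4, 0.35, 0.8, 0.7, 0.2])  # predicted P(positive)

pos, neg = scores[y_true == 1], scores[y_true == 0]
u, _ = mannwhitneyu(pos, neg, alternative="greater")
print(u / (len(pos) * len(neg)))   # 8/9 ~= 0.889, the area under the ROC curve
```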
The F-Measure is the harmonic mean of precision and recall, also known as the traditional F-measure, balanced F-score, or F1-score. Formula: 2*Precision*Recall/(Precision+Recall). See:…
evaluation measure
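A worked instance of the formula (numbers invented):

```python
# The harmonic mean penalises an imbalance between precision and recall.
precision, recall = 0.75, 0.60
f1 = 2 * precision * recall / (precision + recall)
print(f1)   # 0.666..., lower than the arithmetic mean of 0.675
```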
Unweighted(!) macro-average Precision. In macro-averaging, Precision is computed locally over each category first and then the average over all categories is taken.
evaluation measure
The mean prior absolute error (MPAE) is the mean absolute error (see mean_absolute_error) of the prior (e.g., default class prediction). See: http://en.wikipedia.org/wiki/Mean_absolute_error
evaluation measure
The entropy of the class distribution of the prior (see prior_class_complexity), divided by the number of instances in the input data.
evaluation measure
The number of instances used for this evaluation.
evaluation measure
Default information about OS, JVM, installations, etc.
evaluation measure
The Predictive Accuracy is the percentage of instances that are classified correctly. It is 1 - ErrorRate.
evaluation measure
Runtime in seconds of the entire run. In the case of cross-validation runs, this will include all iterations.
evaluation measure
Number of instances that were not classified by the model.
evaluation measure
The time in milliseconds to build and test a single model on all data.
evaluation measure
Intrinsic error component (squared) of the bias-variance decomposition as defined by Webb in: Geoffrey I. Webb (2000), MultiBoosting: A Technique for Combining Boosting and Wagging, Machine Learning,…
evaluation measure
Kappa statistic performance of a decision stump trained on the data. Landmarking meta-feature generated by the Fantail library.
data quality
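An approximate reconstruction of this landmarker with scikit-learn (a depth-1 tree stands in for the decision stump; the cross-validation protocol is an assumption, not Fantail's exact procedure):

```python
# Landmarking sketch: score a depth-1 decision tree ("stump") on the data
# with Cohen's kappa; the resulting number is the meta-feature value.
from sklearn.datasets import load_iris
from sklearn.metrics import cohen_kappa_score
from sklearn.model_selection import cross_val_predict
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
stump = DecisionTreeClassifier(max_depth=1, random_state=0)
preds = cross_val_predict(stump, X, y, cv=10)
print(cohen_kappa_score(y, preds))   # the landmarker value
```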
Skewness over all features. Usually, the min, max and mean are calculated. Skewness is a measure of how non-normal a feature's value distribution is. Many learning algorithms assume normality. Negative…
data quality
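A sketch of the aggregation described above, using SciPy's skew (the data is synthetic):

```python
# Per-feature skewness, aggregated with min, max and mean.
import numpy as np
from scipy.stats import skew

X = np.random.default_rng(0).gamma(shape=2.0, scale=1.0, size=(100, 4))
per_feature = skew(X, axis=0)        # one skewness value per feature
print(per_feature.min(), per_feature.max(), per_feature.mean())
```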
Stream landmarker
data quality
A data quality (meta-feature) extracted by the Fantail library.
data quality
Joint entropies of every attribute and the class attribute. Usually, the min, max and mean are calculated. The joint entropy defines how much information is shared between each attribute and the class…
data quality
The number of features (attributes) in the dataset. Also known as the dimensionality of the dataset.
data quality
The number of instances (examples) in the dataset.
data quality
The number of numeric features (attributes) in the dataset.
data quality
Empirically calculated average ratio of the bias error in the total error, using the Kohavi-Wolpert definition of bias and variance.
flow quality
Empirically calculated average ratio of the variance error in the total error, using the Kohavi-Wolpert definition of bias and variance.
flow quality
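A rough sketch of how these two ratios can be estimated empirically (the bootstrap protocol, learner and dataset below are assumptions for illustration, not the exact procedure):

```python
# Kohavi-Wolpert bias-variance sketch: train the same learner on several
# bootstrap samples, then decompose the zero-one loss at each test point.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rng = np.random.default_rng(0)
T, classes = 50, np.unique(y)

# P_hat[i, c] = fraction of the T models predicting class c for test point i
P_hat = np.zeros((len(y_te), len(classes)))
for _ in range(T):
    idx = rng.integers(0, len(X_tr), len(X_tr))            # bootstrap sample
    model = DecisionTreeClassifier().fit(X_tr[idx], y_tr[idx])
    P_hat[np.arange(len(y_te)), model.predict(X_te)] += 1.0 / T

truth = (y_te[:, None] == classes[None, :]).astype(float)  # one-hot labels
bias2 = 0.5 * ((truth - P_hat) ** 2).sum(axis=1).mean()
variance = 0.5 * (1.0 - (P_hat ** 2).sum(axis=1)).mean()
total = bias2 + variance
print(bias2 / total, variance / total)   # the two ratios listed above
```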
Used for survival analysis.
evaluation measure
Entropy reduction, in bits, between the class distribution generated by the model's predictions, and the prior class distribution. Calculated by taking the difference of the prior_class_complexity and…
evaluation measure
Cohen's kappa coefficient is a statistical measure of agreement for qualitative (categorical) items: it measures the agreement of prediction with the true class – 1.0 signifies complete agreement.…
evaluation measure
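A worked example with scikit-learn (invented labels):

```python
# Kappa: agreement between predictions and truth, corrected for the
# agreement expected by chance alone.
from sklearn.metrics import cohen_kappa_score

y_true = [0, 0, 0, 1, 1, 1]
y_pred = [0, 0, 1, 1, 1, 1]
print(cohen_kappa_score(y_true, y_pred))   # ~0.667; 1.0 = perfect agreement
```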
The Kononenko and Bratko Information score, divided by the prior entropy of the class distribution. See: Kononenko, I., Bratko, I.: Information-based evaluation criterion for classifier's performance.…
evaluation measure
Kononenko and Bratko Information score. This measures predictive accuracy but eliminates the influence of prior probabilities. See: Kononenko, I., Bratko, I.: Information-based evaluation criterion…
evaluation measure
The entropy of the class distribution generated by the model (see class_complexity), divided by the number of instances in the input data.
evaluation measure
Entropy, in bits, of the prior class distribution. Calculated by taking the sum of -log2(priorProb) over all instances, where priorProb is the prior probability of the actual class for that instance.…
evaluation measure
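A sketch relating prior_class_complexity, class_complexity and their difference, the entropy-reduction measure listed earlier (the model probabilities below are invented):

```python
# prior_class_complexity: bits needed under the prior class distribution;
# class_complexity: bits needed under the model's predicted probabilities;
# their difference is the entropy reduction in bits.
import numpy as np

y = np.array([0, 0, 0, 1])                # actual class of each instance
prior = np.bincount(y) / len(y)           # prior distribution: [0.75, 0.25]
p_model = np.array([0.9, 0.8, 0.7, 0.6])  # model's probability of the true class

prior_class_complexity = -np.log2(prior[y]).sum()   # bits under the prior
class_complexity = -np.log2(p_model).sum()          # bits under the model
print(prior_class_complexity - class_complexity)    # entropy reduction in bits
```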
Recall is defined as the number of true positive (TP) predictions, divided by the sum of the number of true positives and false negatives (TP+FN): $\text{Recall} = \frac{TP}{TP+FN}$. It is…
evaluation measure
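From the formula, with invented counts:

```python
# Recall from the confusion-matrix counts.
tp, fn = 30, 10
print(tp / (tp + fn))   # 0.75: three quarters of the actual positives found
```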
The Root Relative Squared Error (RRSE) is the Root Mean Squared Error (RMSE) divided by the Root Mean Prior Squared Error (RMPSE). See root_mean_squared_error and root_mean_prior_squared_error.
evaluation measure
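A sketch of the ratio (invented targets):

```python
# RRSE: the model's RMSE divided by the RMSE of always predicting the mean
# of the targets (the "prior" prediction for regression).
import numpy as np

y_true = np.array([3.0, 5.0, 2.5, 7.0])
y_pred = np.array([2.5, 5.0, 4.0, 8.0])

rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
rmpse = np.sqrt(np.mean((y_true - y_true.mean()) ** 2))   # prior error
print(rmse / rmpse)   # below 1 means the model beats the prior
```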
Amount of memory, in bytes, used during the entire run.
evaluation measure
Amount of virtual memory, in bytes, used during the entire run.
evaluation measure
Entropy of the class attribute. It determines the amount of information needed to specify the class of an instance, or how 'informative' the attributes need to be. A low class entropy means that the…
data quality
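A sketch of this meta-feature (synthetic labels):

```python
# Class entropy: H(C) = -sum_c p(c) * log2 p(c) over the class priors.
import numpy as np

y = np.array([0, 0, 0, 0, 1, 1, 2, 2])
p = np.bincount(y) / len(y)     # class distribution: [0.5, 0.25, 0.25]
print(-(p * np.log2(p)).sum())  # 1.5 bits
```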
Absolute skewness values over all features. Usually, the min, max and mean are calculated. Skewness is a measure of how non-normal a feature's value distribution is. Many learning algorithms assume…
data quality
Predictive accuracy of WEKA's 1-Rule algorithm. Determines how much information is contained in the most predictive attribute.
data quality
Predictive accuracy of WEKA's Naive Bayes algorithm with default parameter settings. Determines to what extent the features are conditionally independent. See Pfahringer et al. (2000) 'Meta-learning…
data quality
Box's M-statistic measures the equality of the covariance matrices of the different classes. If they are equal, then linear discriminants could be used, otherwise, quadratic discriminant functions…
data quality
Chi-squared distribution of the M-statistic over all features.
data quality
The number of instances that have the minority (least occurring) class.
data quality
The number of classes in the class attribute.
data quality
Error rate measured in the bias-variance decomposition as defined by Kohavi and Wolpert in: R. Kohavi & D. Wolpert (1996), Bias plus variance decomposition for zero-one loss functions, in Proc. of the…
evaluation measure
Unweighted(!) macro-average Recall. In macro-averaging, Recall is computed locally over each category first and then the average over all categories is taken.
evaluation measure
The macro weighted (by class size) average Precision. In macro-averaging, Precision is computed locally over each category first and then the average over all categories is taken, weighted by the…
evaluation measure
The Root Mean Prior Squared Error (RMPSE) is the Root Mean Squared Error (RMSE) of the prior (e.g., the default class prediction).
evaluation measure
The predictive accuracy obtained by simply predicting the majority class.
data quality