Flow: 17432 1 sklearn.ensemble.forest.ExtraTreesClassifier sklearn.ExtraTreesClassifier sklearn.ensemble.forest.ExtraTreesClassifier 14
External version: openml==0.10.2,sklearn==0.21.3
Uploaded: 2019-11-22T02:17:14
Language: English
Dependencies: sklearn==0.21.3, numpy>=1.6.1, scipy>=0.9
Tags: openml-python, python, scikit-learn, sklearn, sklearn_0.21.3

An extra-trees classifier. This class implements a meta estimator that fits a
number of randomized decision trees (a.k.a. extra-trees) on various sub-samples
of the dataset and uses averaging to improve the predictive accuracy and
control over-fitting.
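As a quick orientation before the parameter list below, a minimal sketch of
retrieving this flow with openml-python (``openml.flows.get_flow``) and
instantiating the scikit-learn class it wraps. This assumes network access, a
configured OpenML API key, and the dependency versions listed above; the toy
dataset and the printed attributes are illustrative, not part of this record::

    # Sketch only: assumes a configured OpenML API key and network access;
    # the iris data is illustrative and not part of this flow record.
    import openml
    from sklearn.datasets import load_iris
    from sklearn.ensemble import ExtraTreesClassifier

    # Retrieve the flow metadata recorded above (flow id 17432).
    flow = openml.flows.get_flow(17432)
    print(flow.name)          # sklearn.ensemble.forest.ExtraTreesClassifier
    print(flow.dependencies)  # sklearn==0.21.3, numpy>=1.6.1, scipy>=0.9

    # Instantiate the wrapped estimator with the defaults listed below.
    # In sklearn 0.21.x the "warn" sentinel for n_estimators falls back to 10.
    clf = ExtraTreesClassifier(
        n_estimators=10,
        criterion="gini",
        max_features="auto",
        bootstrap=False,
        random_state=0,
    )
    X, y = load_iris(return_X_y=True)
    clf.fit(X, y)
    print(clf.score(X, y))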
Parameters
----------

bootstrap : boolean, default=false
    Whether bootstrap samples are used when building trees. If False, the whole
    dataset is used to build each tree.

class_weight : dict, default=null
    Weights associated with classes in the form ``{class_label: weight}``. If
    not given, all classes are supposed to have weight one. For multi-output
    problems, a list of dicts can be provided in the same order as the columns
    of y.
    Note that for multioutput (including multilabel) weights should be defined
    for each class of every column in its own dict. For example, for four-class
    multilabel classification weights should be
    [{0: 1, 1: 1}, {0: 1, 1: 5}, {0: 1, 1: 1}, {0: 1, 1: 1}] instead of
    [{1:1}, {2:5}, {3:1}, {4:1}].
    The "balanced" mode uses the values of y to automatically adjust weights
    inversely proportional to class frequencies in the input data as
    ``n_samples / (n_classes * np.bincount(y))``.
    The "balanced_subsample" mode is the same as "balanced" except that weights
    are computed based on the bootstrap sample for every tree grown.
    For multi-output, the weights of each column of y will be multiplied.
    Note that these weights will be multiplied with sample_weight (passed
    through the fit method) if sample_weight is specified.

criterion : string, default="gini"
    The function to measure the quality of a split. Supported criteria are
    "gini" for the Gini impurity and "entropy" for the information gain.

max_depth : integer or None, default=null
    The maximum depth of the tree. If None, then nodes are expanded until all
    leaves are pure or until all leaves contain less than min_samples_split
    samples.

max_features : int, default="auto"
    The number of features to consider when looking for the best split:
    - If int, then consider `max_features` features at each split.
    - If float, then `max_features` is a fraction and
      `int(max_features * n_features)` features are considered at each split.
    - If "auto", then `max_features=sqrt(n_features)`.
    - If "sqrt", then `max_features=sqrt(n_features)`.
    - If "log2", then `max_features=log2(n_features)`.
    - If None, then `max_features=n_features`.
    Note: the search for a split does not stop until at least one valid
    partition of the node samples is found, even if it requires to effectively
    inspect more than ``max_features`` features.

max_leaf_nodes : int or None, default=null
    Grow trees with ``max_leaf_nodes`` in best-first fashion. Best nodes are
    defined as relative reduction in impurity. If None then unlimited number
    of leaf nodes.

min_impurity_decrease : float, default=0.0
    A node will be split if this split induces a decrease of the impurity
    greater than or equal to this value.
    The weighted impurity decrease equation is the following::

        N_t / N * (impurity - N_t_R / N_t * right_impurity
                            - N_t_L / N_t * left_impurity)

    where ``N`` is the total number of samples, ``N_t`` is the number of
    samples at the current node, ``N_t_L`` is the number of samples in the left
    child, and ``N_t_R`` is the number of samples in the right child.
    ``N``, ``N_t``, ``N_t_R`` and ``N_t_L`` all refer to the weighted sum, if
    ``sample_weight`` is passed.
    .. versionadded:: 0.19

min_impurity_split : float, default=null
    Threshold for early stopping in tree growth. A node will split if its
    impurity is above the threshold, otherwise it is a leaf.
    .. deprecated:: 0.19
       ``min_impurity_split`` has been deprecated in favor of
       ``min_impurity_decrease`` in 0.19. The default value of
       ``min_impurity_split`` will change from 1e-7 to 0 in 0.23 and it will be
       removed in 0.25. Use ``min_impurity_decrease`` instead.

min_samples_leaf : int, default=1
    The minimum number of samples required to be at a leaf node. A split point
    at any depth will only be considered if it leaves at least
    ``min_samples_leaf`` training samples in each of the left and right
    branches. This may have the effect of smoothing the model, especially in
    regression.
    - If int, then consider `min_samples_leaf` as the minimum number.
    - If float, then `min_samples_leaf` is a fraction and
      `ceil(min_samples_leaf * n_samples)` are the minimum number of samples
      for each node.
    .. versionchanged:: 0.18
       Added float values for fractions.

min_samples_split : int, default=2
    The minimum number of samples required to split an internal node:
    - If int, then consider `min_samples_split` as the minimum number.
    - If float, then `min_samples_split` is a fraction and
      `ceil(min_samples_split * n_samples)` are the minimum number of samples
      for each split.
    .. versionchanged:: 0.18
       Added float values for fractions.

min_weight_fraction_leaf : float, default=0.0
    The minimum weighted fraction of the sum total of weights (of all the input
    samples) required to be at a leaf node. Samples have equal weight when
    sample_weight is not provided.

n_estimators : integer, default="warn"
    The number of trees in the forest.
    .. versionchanged:: 0.20
       The default value of ``n_estimators`` will change from 10 in version
       0.20 to 100 in version 0.22.

n_jobs : int or None, default=null
    The number of jobs to run in parallel for both `fit` and `predict`.
    ``None`` means 1 unless in a :obj:`joblib.parallel_backend` context.
    ``-1`` means using all processors. See :term:`Glossary <n_jobs>` for more
    details.

oob_score : bool, default=false
    Whether to use out-of-bag samples to estimate the generalization accuracy.

random_state : int, default=0
    If int, random_state is the seed used by the random number generator; if
    RandomState instance, random_state is the random number generator; if None,
    the random number generator is the RandomState instance used by
    `np.random`.

verbose : int, default=0
    Controls the verbosity when fitting and predicting.

warm_start : bool, default=false
    When set to ``True``, reuse the solution of the previous call to fit and
    add more estimators to the ensemble; otherwise, just fit a whole new
    forest. See :term:`the Glossary <warm_start>`.
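As a reading aid for ``min_impurity_decrease``, the sketch below evaluates the
weighted impurity-decrease expression quoted in that entry. The function name
and the example numbers are made up for illustration and are not part of
scikit-learn's API::

    # Hedged illustration (not library code): evaluates the expression
    # N_t / N * (impurity - N_t_R / N_t * right_impurity
    #                     - N_t_L / N_t * left_impurity)
    # where the counts may be weighted sums when sample_weight is used.
    def weighted_impurity_decrease(N, N_t, N_t_L, N_t_R,
                                   impurity, left_impurity, right_impurity):
        return (N_t / N) * (impurity
                            - (N_t_R / N_t) * right_impurity
                            - (N_t_L / N_t) * left_impurity)

    # Example: a node holding 40 of 100 samples, split into 25 / 15 children,
    # with Gini impurity 0.5 at the node and 0.2 / 0.3 in the children.
    decrease = weighted_impurity_decrease(
        N=100, N_t=40, N_t_L=25, N_t_R=15,
        impurity=0.5, left_impurity=0.2, right_impurity=0.3)
    print(decrease)  # 0.105 -- the split is kept when this >= min_impurity_decrease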