Flow 18439 (uploader 12269)
Name:        sklearn.pipeline.Pipeline(step_0=automl.components.feature_preprocessing.one_hot_encoding.OneHotEncoderComponent,step_1=automl.util.sklearn.StackingEstimator(estimator=sklearn.ensemble._weight_boosting.AdaBoostClassifier),step_2=sklearn.tree._classes.DecisionTreeClassifier)
Custom name: sklearn.Pipeline(OneHotEncoderComponent,StackingEstimator,DecisionTreeClassifier)
Class:       sklearn.pipeline.Pipeline
Version:     1
Dependencies: automl==0.0.1, openml==0.10.2, sklearn==0.22.1
External dependencies: sklearn==0.22.1, numpy>=1.6.1, scipy>=0.9
Uploaded:    2020-05-20T18:57:57 (English)
Tags:        openml-python, python, scikit-learn, sklearn, sklearn_0.22.1

Description:
Pipeline of transforms with a final estimator. Sequentially apply a list of transforms and a final estimator. Intermediate steps of the pipeline must be 'transforms', that is, they must implement fit and transform methods. The final estimator only needs to implement fit. The transformers in the pipeline can be cached using the ``memory`` argument.

The purpose of the pipeline is to assemble several steps that can be cross-validated together while setting different parameters. For this, it enables setting parameters of the various steps using their names and the parameter name separated by a '__', as in the example below. A step's estimator may be replaced entirely by setting the parameter with its name to another estimator, or a transformer may be removed by setting it to 'passthrough' or ``None``.

Parameters:

memory : default = null
    Used to cache the fitted transformers of the pipeline. By default, no caching is performed. If a string is given, it is the path to the caching directory. Enabling caching triggers a clone of the transformers before fitting. Therefore, the transformer instance given to the pipeline cannot be inspected directly. Use the attribute ``named_steps`` or ``steps`` to inspect estimators within the pipeline. Caching the transformers is advantageous when fitting is time consuming.

steps : list
    value: [{"oml-python:serialized_object": "component_reference", "value": {"key": "step_0", "step_name": "step_0"}}, {"oml-python:serialized_object": "component_reference", "value": {"key": "step_1", "step_name": "step_1"}}, {"oml-python:serialized_object": "component_reference", "value": {"key": "step_2", "step_name": "step_2"}}]
    List of (name, transform) tuples (implementing fit/transform) that are chained, in the order in which they are chained, with the last object an estimator.

verbose : bool, default = false
    If True, the time elapsed while fitting each step will be printed as it is completed.
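A minimal sketch of how such a pipeline is assembled and parameterised with plain scikit-learn parts. The `automl` package (version 0.0.1) that provides OneHotEncoderComponent and StackingEstimator is not publicly available, so sklearn.preprocessing.OneHotEncoder stands in for step_0 and step_1 is omitted; what the sketch shows is the (name, transform) step list and the '__' parameter addressing described above::

    # Hypothetical reconstruction: OneHotEncoder stands in for the
    # unavailable automl.OneHotEncoderComponent; step_1 is left out.
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import OneHotEncoder
    from sklearn.tree import DecisionTreeClassifier

    pipe = Pipeline(steps=[
        ("step_0", OneHotEncoder(handle_unknown="ignore")),
        ("step_2", DecisionTreeClassifier()),
    ])

    # Parameters of any step are addressed as <step name>__<parameter name>.
    pipe.set_params(step_2__max_depth=3)

    # A step can be replaced wholesale, or disabled with 'passthrough'.
    pipe.set_params(step_0="passthrough")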
Component step_2 — Flow 17504 (uploader 11295)
Name:        sklearn.tree._classes.DecisionTreeClassifier
Custom name: sklearn.DecisionTreeClassifier
Class:       sklearn.tree._classes.DecisionTreeClassifier
Version:     3
Dependencies: openml==0.10.2, sklearn==0.22.1
External dependencies: sklearn==0.22.1, numpy>=1.6.1, scipy>=0.9
Uploaded:    2020-02-08T19:46:35 (English)
Tags:        openml-python, python, scikit-learn, sklearn, sklearn_0.22.1

Description:
A decision tree classifier.

Parameters:

ccp_alpha : non-negative float, default = 0.0
    Complexity parameter used for Minimal Cost-Complexity Pruning. The subtree with the largest cost complexity that is smaller than ``ccp_alpha`` will be chosen. By default, no pruning is performed. See :ref:`minimal_cost_complexity_pruning` for details.
    .. versionadded:: 0.22

class_weight : dict, default = null
    Weights associated with classes in the form ``{class_label: weight}``. If None, all classes are supposed to have weight one. For multi-output problems, a list of dicts can be provided in the same order as the columns of y. Note that for multioutput (including multilabel) weights should be defined for each class of every column in its own dict. For example, for four-class multilabel classification weights should be [{0: 1, 1: 1}, {0: 1, 1: 5}, {0: 1, 1: 1}, {0: 1, 1: 1}] instead of [{1:1}, {2:5}, {3:1}, {4:1}]. The "balanced" mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as ``n_samples / (n_classes * np.bincount(y))``. For multi-output, the weights of each column of y will be multiplied. Note that these weights will be multiplied with sample_weight (passed through the fit method) if sample_weight is specified.

criterion : default = "gini"

max_depth : int, default = null
    The maximum depth of the tree. If None, then nodes are expanded until all leaves are pure or until all leaves contain less than min_samples_split samples.

max_features : int, float or string, default = null
    The number of features to consider when looking for the best split:
    - If int, then consider `max_features` features at each split.
    - If float, then `max_features` is a fraction and `int(max_features * n_features)` features are considered at each split.
    - If "auto", then `max_features=sqrt(n_features)`.
    - If "sqrt", then `max_features=sqrt(n_features)`.
    - If "log2", then `max_features=log2(n_features)`.
    - If None, then `max_features=n_features`.
    Note: the search for a split does not stop until at least one valid partition of the node samples is found, even if it requires to effectively inspect more than ``max_features`` features.

max_leaf_nodes : int, default = null
    Grow a tree with ``max_leaf_nodes`` in best-first fashion. Best nodes are defined as relative reduction in impurity. If None then unlimited number of leaf nodes.

min_impurity_decrease : float, default = 0.0
    A node will be split if this split induces a decrease of the impurity greater than or equal to this value. The weighted impurity decrease equation is the following::

        N_t / N * (impurity - N_t_R / N_t * right_impurity
                            - N_t_L / N_t * left_impurity)

    where ``N`` is the total number of samples, ``N_t`` is the number of samples at the current node, ``N_t_L`` is the number of samples in the left child, and ``N_t_R`` is the number of samples in the right child. ``N``, ``N_t``, ``N_t_R`` and ``N_t_L`` all refer to the weighted sum, if ``sample_weight`` is passed.
    .. versionadded:: 0.19

min_impurity_split : float, default = null
    Threshold for early stopping in tree growth. A node will split if its impurity is above the threshold, otherwise it is a leaf.
    .. deprecated:: 0.19
       ``min_impurity_split`` has been deprecated in favor of ``min_impurity_decrease`` in 0.19. The default value of ``min_impurity_split`` will change from 1e-7 to 0 in 0.23 and it will be removed in 0.25. Use ``min_impurity_decrease`` instead.

min_samples_leaf : int or float, default = 1
    The minimum number of samples required to be at a leaf node. A split point at any depth will only be considered if it leaves at least ``min_samples_leaf`` training samples in each of the left and right branches. This may have the effect of smoothing the model, especially in regression.
    - If int, then consider `min_samples_leaf` as the minimum number.
    - If float, then `min_samples_leaf` is a fraction and `ceil(min_samples_leaf * n_samples)` are the minimum number of samples for each node.
    .. versionchanged:: 0.18
       Added float values for fractions.

min_samples_split : int or float, default = 2
    The minimum number of samples required to split an internal node:
    - If int, then consider `min_samples_split` as the minimum number.
    - If float, then `min_samples_split` is a fraction and `ceil(min_samples_split * n_samples)` are the minimum number of samples for each split.
    .. versionchanged:: 0.18
       Added float values for fractions.

min_weight_fraction_leaf : float, default = 0.0
    The minimum weighted fraction of the sum total of weights (of all the input samples) required to be at a leaf node. Samples have equal weight when sample_weight is not provided.

presort : deprecated, default = "deprecated"
    This parameter is deprecated and will be removed in v0.24.
    .. deprecated:: 0.22

random_state : int or RandomState, default = null
    If int, random_state is the seed used by the random number generator; if RandomState instance, random_state is the random number generator; if None, the random number generator is the RandomState instance used by `np.random`.

splitter : default = "best"
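The stopping and pruning parameters above interact; a small illustrative example with made-up values (not this flow's settings), runnable against sklearn==0.22.1::

    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)
    clf = DecisionTreeClassifier(
        criterion="gini",
        max_depth=None,              # expand until leaves are pure ...
        min_samples_split=2,         # ... or contain fewer samples than this
        min_impurity_decrease=0.01,  # illustrative: require this weighted impurity drop per split
        ccp_alpha=0.005,             # illustrative: minimal cost-complexity pruning
        random_state=0,
    ).fit(X, y)
    print(clf.get_depth(), clf.get_n_leaves())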
Component step_0 — Flow 17714 (uploader 12269)
Name:        automl.components.feature_preprocessing.one_hot_encoding.OneHotEncoderComponent
Custom name: automl.OneHotEncoderComponent
Class:       automl.components.feature_preprocessing.one_hot_encoding.OneHotEncoderComponent
Version:     1
Dependencies: automl==0.0.1, openml==0.10.2, sklearn==0.22.1
External dependencies: sklearn==0.22.1, numpy>=1.6.1, scipy>=0.9
Uploaded:    2020-05-18T21:55:11 (English)
Tags:        openml-python, python, scikit-learn, sklearn, sklearn_0.22.1

Description:
OneHotEncoderComponent: a OneHotEncoder that can handle missing values and multiple categorical columns.

Component step_1 — Flow 17869 (uploader 12269)
Name:        automl.util.sklearn.StackingEstimator(estimator=sklearn.ensemble._weight_boosting.AdaBoostClassifier)
Custom name: automl.StackingEstimator
Class:       automl.util.sklearn.StackingEstimator
Version:     1
Dependencies: automl==0.0.1, openml==0.10.2, sklearn==0.22.1
External dependencies: sklearn==0.22.1, numpy>=1.6.1, scipy>=0.9
Uploaded:    2020-05-19T03:29:35 (English)
Tags:        openml-python, python, scikit-learn, sklearn, sklearn_0.22.1

Description:
StackingEstimator: a shallow wrapper around a classification algorithm to implement the transform method. Allows stacking of arbitrary classification algorithms in a pipeline.

Parameters:

estimator : PredictionMixin
    value: {"oml-python:serialized_object": "component_reference", "value": {"key": "estimator", "step_name": null}}
    An instance implementing PredictionMixin.
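The automl source is not published, so the following is only a guess at the shape of such a wrapper, modelled on the similarly named StackingEstimator shipped with TPOT: fit the wrapped classifier, then let transform() append its predictions (and class probabilities, when available) as extra feature columns, which is what makes an arbitrary classifier usable as an intermediate pipeline step::

    # Hypothetical stand-in for automl.util.sklearn.StackingEstimator.
    import numpy as np
    from sklearn.base import BaseEstimator, TransformerMixin

    class StackingEstimator(BaseEstimator, TransformerMixin):
        def __init__(self, estimator):
            self.estimator = estimator

        def fit(self, X, y=None, **fit_params):
            self.estimator.fit(X, y, **fit_params)
            return self

        def transform(self, X):
            X = np.asarray(X)
            parts = [X]
            # Append class probabilities when the wrapped model exposes them.
            if hasattr(self.estimator, "predict_proba"):
                parts.append(self.estimator.predict_proba(X))
            # Always append the hard predictions as one extra column.
            parts.append(self.estimator.predict(X).reshape(-1, 1))
            return np.hstack(parts)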
Component estimator (of step_1) — Flow 17696 (uploader 12269)
Name:        sklearn.ensemble._weight_boosting.AdaBoostClassifier
Custom name: sklearn.AdaBoostClassifier
Class:       sklearn.ensemble._weight_boosting.AdaBoostClassifier
Version:     2
Dependencies: openml==0.10.2, sklearn==0.22.1
External dependencies: sklearn==0.22.1, numpy>=1.6.1, scipy>=0.9
Uploaded:    2020-05-18T19:37:50 (English)
Tags:        openml-python, python, scikit-learn, sklearn, sklearn_0.22.1

Description:
An AdaBoost classifier. An AdaBoost [1] classifier is a meta-estimator that begins by fitting a classifier on the original dataset and then fits additional copies of the classifier on the same dataset but where the weights of incorrectly classified instances are adjusted such that subsequent classifiers focus more on difficult cases. This class implements the algorithm known as AdaBoost-SAMME [2].

Parameters:

algorithm : {'SAMME', 'SAMME.R'}, default = "SAMME"
    (sklearn's own default is 'SAMME.R'.) If 'SAMME.R' then use the SAMME.R real boosting algorithm; ``base_estimator`` must support calculation of class probabilities. If 'SAMME' then use the SAMME discrete boosting algorithm. The SAMME.R algorithm typically converges faster than SAMME, achieving a lower test error with fewer boosting iterations.

base_estimator : object, default = null
    The base estimator from which the boosted ensemble is built. Support for sample weighting is required, as well as proper ``classes_`` and ``n_classes_`` attributes. If ``None``, then the base estimator is ``DecisionTreeClassifier(max_depth=1)``.

learning_rate : float, default = 1.5568079200489067e-05
    Learning rate shrinks the contribution of each classifier by ``learning_rate``. There is a trade-off between ``learning_rate`` and ``n_estimators``.

n_estimators : int, default = 532
    The maximum number of estimators at which boosting is terminated. In case of a perfect fit, the learning procedure is stopped early.

random_state : int, default = 42
    If int, random_state is the seed used by the random number generator; if RandomState instance, random_state is the random number generator; if None, the random number generator is the RandomState instance used by `np.random`.
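Reconstructing the configured sub-estimator from the record is straightforward; every argument below is taken verbatim from the parameter values above (in sklearn 0.22, base_estimator=None resolves to DecisionTreeClassifier(max_depth=1) stumps, as noted)::

    from sklearn.ensemble import AdaBoostClassifier

    booster = AdaBoostClassifier(
        algorithm="SAMME",
        base_estimator=None,  # -> DecisionTreeClassifier(max_depth=1) stumps
        learning_rate=1.5568079200489067e-05,
        n_estimators=532,
        random_state=42,
    )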