Flow: sklearn.pipeline.Pipeline (ids: 18428, 12269)

Full name: sklearn.pipeline.Pipeline(
    step_0=automl.components.feature_preprocessing.multi_column_label_encoder.MultiColumnLabelEncoderComponent,
    step_1=sklearn.preprocessing._discretization.KBinsDiscretizer,
    step_2=sklearn.ensemble._weight_boosting.AdaBoostClassifier)
Custom name: sklearn.Pipeline(MultiColumnLabelEncoderComponent, KBinsDiscretizer, AdaBoostClassifier)
Class name: sklearn.pipeline.Pipeline
Version: 1
External version: automl==0.0.1, openml==0.10.2, sklearn==0.22.1
Uploaded: 2020-05-20T18:19:46
Language: English
Dependencies: sklearn==0.22.1, numpy>=1.6.1, scipy>=0.9

Description:
Pipeline of transforms with a final estimator. Sequentially applies a list of
transforms and a final estimator. Intermediate steps of the pipeline must be
'transforms', that is, they must implement fit and transform methods. The
final estimator only needs to implement fit. The transformers in the pipeline
can be cached using the ``memory`` argument.

The purpose of the pipeline is to assemble several steps that can be
cross-validated together while setting different parameters. For this, it
enables setting parameters of the various steps using their names and the
parameter name separated by '__'. A step's estimator may be replaced entirely
by setting the parameter with its name to another estimator, or a transformer
may be removed by setting it to 'passthrough' or ``None``.

Parameters:

memory (default: None)
    Used to cache the fitted transformers of the pipeline. By default, no
    caching is performed. If a string is given, it is the path to the caching
    directory. Enabling caching triggers a clone of the transformers before
    fitting. Therefore, the transformer instance given to the pipeline cannot
    be inspected directly. Use the attribute ``named_steps`` or ``steps`` to
    inspect estimators within the pipeline.
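The nested-parameter ('__') and step-replacement ('passthrough') behaviour described above can be shown with a small sketch; StandardScaler and LogisticRegression here are generic stand-ins, not the steps of this flow:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

pipe = Pipeline([
    ("scale", StandardScaler()),    # intermediate step: must implement fit/transform
    ("clf", LogisticRegression()),  # final estimator: only needs fit
])

# Nested parameters are addressed as <step name>__<parameter name>.
pipe.set_params(clf__C=0.5)

# A transformer is removed by replacing it with 'passthrough'.
pipe.set_params(scale="passthrough")
```

After the second `set_params` call the scaling step is skipped entirely, while `named_steps` still exposes both entries for inspection.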
    Caching the transformers is advantageous when fitting is time consuming.

steps (list; default: [{"oml-python:serialized_object": "component_reference",
    "value": {"key": "step_0", "step_name": "step_0"}},
    {"oml-python:serialized_object": "component_reference", "value":
    {"key": "step_1", "step_name": "step_1"}},
    {"oml-python:serialized_object": "component_reference", "value":
    {"key": "step_2", "step_name": "step_2"}}])
    List of (name, transform) tuples (implementing fit/transform) that are
    chained, in the order in which they are chained, with the last object an
    estimator.

verbose (bool; default: false)
    If True, the time elapsed while fitting each step will be printed as it
    is completed.

Component step_2: sklearn.ensemble._weight_boosting.AdaBoostClassifier (ids: 17696, 12269)

Custom name: sklearn.AdaBoostClassifier
Version: 2
External version: openml==0.10.2, sklearn==0.22.1
Uploaded: 2020-05-18T19:37:50
Language: English
Dependencies: sklearn==0.22.1, numpy>=1.6.1, scipy>=0.9

Description:
An AdaBoost classifier. An AdaBoost [1] classifier is a meta-estimator that
begins by fitting a classifier on the original dataset and then fits
additional copies of the classifier on the same dataset, but where the
weights of incorrectly classified instances are adjusted such that subsequent
classifiers focus more on difficult cases. This class implements the
algorithm known as AdaBoost-SAMME [2].

Parameters:

algorithm (default: "SAMME")
base_estimator (object; default: null)
    The base estimator from which the boosted ensemble is built. Support for
    sample weighting is required, as well as proper ``classes_`` and
    ``n_classes_`` attributes. If ``None``, then the base estimator is
    ``DecisionTreeClassifier(max_depth=1)``.

learning_rate (float; default: 1.5568079200489067e-05)
    Learning rate shrinks the contribution of each classifier by
    ``learning_rate``.
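The reweighting behaviour in the AdaBoost docstring can be exercised directly. This is a generic sketch on synthetic data with ordinary defaults, not the flow's recorded hyperparameters:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier

X, y = make_classification(n_samples=200, n_features=10, random_state=42)

# With no base estimator given, each round fits a depth-1 decision stump;
# samples misclassified in one round get larger weights in the next.
clf = AdaBoostClassifier(n_estimators=50, learning_rate=1.0, random_state=42)
clf.fit(X, y)

# One fitted copy per boosting round (fewer if a round achieves a perfect fit,
# matching the early-stopping note under n_estimators below).
print(len(clf.estimators_), clf.score(X, y))
```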
    There is a trade-off between ``learning_rate`` and ``n_estimators``.

algorithm : {'SAMME', 'SAMME.R'}, optional (default='SAMME.R')
    If 'SAMME.R', then use the SAMME.R real boosting algorithm;
    ``base_estimator`` must support calculation of class probabilities. If
    'SAMME', then use the SAMME discrete boosting algorithm. The SAMME.R
    algorithm typically converges faster than SAMME, achieving a lower test
    error with fewer boosting iterations.

n_estimators (int; default: 532)
    The maximum number of estimators at which boosting is terminated. In case
    of perfect fit, the learning procedure is stopped early.

random_state (int; default: 42)
    If int, random_state is the seed used by the random number generator; if
    RandomState instance, random_state is the random number generator; if
    None, the random number generator is the RandomState instance used by
    ``np.random``.

Tags: openml-python, python, scikit-learn, sklearn, sklearn_0.22.1

Component step_0: automl.components.feature_preprocessing.multi_column_label_encoder.MultiColumnLabelEncoderComponent (ids: 17711, 12269)

Custom name: automl.MultiColumnLabelEncoderComponent
Version: 1
External version: automl==0.0.1, openml==0.10.2, sklearn==0.22.1
Uploaded: 2020-05-18T21:55:00
Language: English
Dependencies: sklearn==0.22.1, numpy>=1.6.1, scipy>=0.9

Description:
MultiColumnLabelEncoderComponent: a ColumnEncoder that can handle missing
values and multiple categorical columns.

Parameters:

columns (List; default: null)
    List of columns to be encoded.

Tags: openml-python, python, scikit-learn, sklearn, sklearn_0.22.1

Component step_1: sklearn.preprocessing._discretization.KBinsDiscretizer (ids: 17785, 12269)

Custom name: sklearn.KBinsDiscretizer
Version: 1
External version: openml==0.10.2, sklearn==0.22.1
Uploaded: 2020-05-19T00:02:40
Language: English
Dependencies: sklearn==0.22.1, numpy>=1.6.1, scipy>=0.9

Description:
Bin continuous data into intervals.

Parameters:

encode (default: "onehot")
n_bins (int or array; default: 2)
    The number of bins to produce.
    Raises ValueError if ``n_bins < 2``.

encode : {'onehot', 'onehot-dense', 'ordinal'}, (default='onehot')
    Method used to encode the transformed result.
    onehot: encode the transformed result with one-hot encoding and return a
        sparse matrix. Ignored features are always stacked to the right.
    onehot-dense: encode the transformed result with one-hot encoding and
        return a dense array. Ignored features are always stacked to the
        right.
    ordinal: return the bin identifier encoded as an integer value.

strategy : {'uniform', 'quantile', 'kmeans'}, (default='quantile')
    Strategy used to define the widths of the bins.
    uniform: all bins in each feature have identical widths.
    quantile: all bins in each feature have the same number of points.
    kmeans: values in each bin have the same nearest center of a 1D k-means
        cluster.

strategy (default: "uniform")

Tags (step_1): openml-python, python, scikit-learn, sklearn, sklearn_0.22.1
Tags (flow): openml-python, python, scikit-learn, sklearn, sklearn_0.22.1
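Putting the three steps together: the automl component's source is not part of this record, so the sketch below uses a minimal hypothetical stand-in (a per-column LabelEncoder that maps missing values to a sentinel category), then applies the flow's recorded settings for KBinsDiscretizer and AdaBoostClassifier. The recorded `algorithm='SAMME'` is left out because newer scikit-learn releases have deprecated the `algorithm` parameter:

```python
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.ensemble import AdaBoostClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import KBinsDiscretizer, LabelEncoder


class MultiColumnLabelEncoder(BaseEstimator, TransformerMixin):
    """Hypothetical stand-in for MultiColumnLabelEncoderComponent:
    label-encodes every column, treating missing values (None) as
    their own 'missing' category."""

    def fit(self, X, y=None):
        X = np.asarray(X, dtype=object)
        self.encoders_ = []
        for j in range(X.shape[1]):
            col = np.where(X[:, j] == None, "missing", X[:, j])  # noqa: E711
            self.encoders_.append(LabelEncoder().fit(col.astype(str)))
        return self

    def transform(self, X):
        X = np.asarray(X, dtype=object)
        out = np.empty(X.shape, dtype=int)
        for j, enc in enumerate(self.encoders_):
            col = np.where(X[:, j] == None, "missing", X[:, j])  # noqa: E711
            out[:, j] = enc.transform(col.astype(str))
        return out


# Recorded flow settings: n_bins=2, encode='onehot', strategy='uniform',
# n_estimators=532, learning_rate=1.5568079200489067e-05, random_state=42.
pipe = Pipeline([
    ("step_0", MultiColumnLabelEncoder()),
    ("step_1", KBinsDiscretizer(n_bins=2, encode="onehot", strategy="uniform")),
    ("step_2", AdaBoostClassifier(n_estimators=532,
                                  learning_rate=1.5568079200489067e-05,
                                  random_state=42)),
])
```

With `encode='onehot'` the discretizer emits a sparse matrix, which the tree-based AdaBoost estimator accepts directly; the tiny learning rate and large `n_estimators` reflect the values stored in this flow rather than sensible general-purpose defaults.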