Flow 18922 (6691): sklearn.pipeline.Pipeline

Name: sklearn.pipeline.Pipeline(imputer=sklearn.preprocessing.imputation.Imputer,estimator=sklearn.tree.tree.DecisionTreeClassifier)
Custom name: sklearn.Pipeline(Imputer,DecisionTreeClassifier)
Class name: sklearn.pipeline.Pipeline
Version: 22
External version: openml==0.12.2, sklearn==0.19.0
Uploaded: 2021-08-14T02:45:16
Language: English
Dependencies: sklearn==0.19.0, numpy>=1.8.2, scipy>=0.13.3
Tags: openml-python, python, scikit-learn, sklearn, sklearn_0.19.0

Description: Pipeline of transforms with a final estimator. Sequentially apply a list of transforms and a final estimator. Intermediate steps of the pipeline must be 'transforms', that is, they must implement fit and transform methods. The final estimator only needs to implement fit. The transformers in the pipeline can be cached using the `memory` argument. The purpose of the pipeline is to assemble several steps that can be cross-validated together while setting different parameters. For this, it enables setting parameters of the various steps using their names and the parameter name separated by '__', as in the example below. A step's estimator may be replaced entirely by setting the parameter with its name to another estimator, or a transformer may be removed by setting it to None.

Parameters:
- memory (Instance of sklearn; default: null): Used to cache the fitted transformers of the pipeline. By default, no caching is performed. If a string is given, it is the path to the caching directory. Enabling caching triggers a clone of the transformers before fitting. Therefore, the transformer instance given to the pipeline cannot be inspected directly. Use the attribute `named_steps` or `steps` to inspect estimators within the pipeline. Caching the transformers is advantageous when fitting is time consuming.
- steps (list; default: [{"oml-python:serialized_object": "component_reference", "value": {"key": "imputer", "step_name": "imputer"}}, {"oml-python:serialized_object": "component_reference", "value": {"key": "estimator", "step_name": "estimator"}}]): List of (name, transform) tuples (implementing fit/transform) that are chained, in the order in which they are chained, with the last object an estimator.

Components:
- imputer: Flow 18923, sklearn.preprocessing.imputation.Imputer (described below)
- estimator: Flow 18924, sklearn.tree.tree.DecisionTreeClassifier (described below)
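To reproduce this flow locally, here is a minimal sketch assuming the pinned sklearn==0.19.0 environment; variable names are illustrative. Note that `Imputer` was removed in sklearn 0.22 in favor of `sklearn.impute.SimpleImputer`, so this only runs on the pinned version.

```python
# Minimal sketch of flow 18922 under the pinned sklearn==0.19.0.
# NOTE: Imputer was removed in sklearn 0.22; on newer versions use
# sklearn.impute.SimpleImputer instead.
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import Imputer
from sklearn.tree import DecisionTreeClassifier

pipe = Pipeline(steps=[
    ("imputer", Imputer(missing_values="NaN", strategy="mean", axis=0)),
    ("estimator", DecisionTreeClassifier(criterion="gini")),
])

# Step parameters are addressed as <step_name>__<parameter>:
pipe.set_params(imputer__strategy="median", estimator__max_depth=5)
```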
Component "imputer": Flow 18923 (6691), sklearn.preprocessing.imputation.Imputer

Custom name: sklearn.Imputer
Class name: sklearn.preprocessing.imputation.Imputer
Version: 53
External version: openml==0.12.2, sklearn==0.19.0
Uploaded: 2021-08-14T02:45:16
Language: English
Dependencies: sklearn==0.19.0, numpy>=1.8.2, scipy>=0.13.3
Tags: openml-python, python, scikit-learn, sklearn, sklearn_0.19.0

Description: Imputation transformer for completing missing values.

Parameters:
- axis (integer; default: 0): The axis along which to impute. If `axis=0`, then impute along columns; if `axis=1`, then impute along rows.
- copy (boolean; default: true): If True, a copy of X will be created. If False, imputation will be done in-place whenever possible. Note that, in the following cases, a new copy will always be made, even if `copy=False`:
  - If X is not an array of floating values;
  - If X is sparse and `missing_values=0`;
  - If `axis=0` and X is encoded as a CSR matrix;
  - If `axis=1` and X is encoded as a CSC matrix.
- missing_values (integer or "NaN"): The placeholder for the missing values. All occurrences of `missing_values` will be imputed. For missing values encoded as np.nan, use the string value "NaN".
- strategy (string; default: "mean"): The imputation strategy. If "mean", then replace missing values using the mean along the axis; if "median", using the median along the axis; if "most_frequent", using the most frequent value along the axis.
- verbose (integer; default: 0): Controls the verbosity of the imputer.
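To illustrate the defaults above (a sketch, again assuming sklearn==0.19.0), mean imputation along `axis=0` fills each NaN with its column mean:

```python
import numpy as np
from sklearn.preprocessing import Imputer  # removed in sklearn 0.22

X = np.array([[1.0, 2.0],
              [np.nan, 3.0],
              [7.0, 6.0]])

imp = Imputer(missing_values="NaN", strategy="mean", axis=0)
print(imp.fit_transform(X))
# [[1. 2.]
#  [4. 3.]   <- NaN replaced by the column mean (1 + 7) / 2 = 4
#  [7. 6.]]
```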
Component "estimator": Flow 18924 (6691), sklearn.tree.tree.DecisionTreeClassifier

Custom name: sklearn.DecisionTreeClassifier
Class name: sklearn.tree.tree.DecisionTreeClassifier
Version: 67
External version: openml==0.12.2, sklearn==0.19.0
Uploaded: 2021-08-14T02:45:16
Language: English
Dependencies: sklearn==0.19.0, numpy>=1.8.2, scipy>=0.13.3
Tags: openml-python, python, scikit-learn, sklearn, sklearn_0.19.0

Description: A decision tree classifier.

Parameters:
- class_weight (dict; default: null): Weights associated with classes in the form `{class_label: weight}`. If not given, all classes are supposed to have weight one. For multi-output problems, a list of dicts can be provided in the same order as the columns of y. Note that for multi-output (including multilabel) problems, weights should be defined for each class of every column in its own dict. For example, for four-class multilabel classification, weights should be [{0: 1, 1: 1}, {0: 1, 1: 5}, {0: 1, 1: 1}, {0: 1, 1: 1}] instead of [{1: 1}, {2: 5}, {3: 1}, {4: 1}]. The "balanced" mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as `n_samples / (n_classes * np.bincount(y))`. For multi-output, the weights of each column of y will be multiplied. Note that these weights will be multiplied with sample_weight (passed through the fit method) if sample_weight is specified.
- criterion (string; default: "gini"): The function to measure the quality of a split. Supported criteria are "gini" for the Gini impurity and "entropy" for the information gain.
- max_depth (int or None; default: null): The maximum depth of the tree. If None, then nodes are expanded until all leaves are pure or until all leaves contain fewer than min_samples_split samples.
- max_features (int; default: null): The number of features to consider when looking for the best split:
  - If int, then consider `max_features` features at each split.
  - If float, then `max_features` is a percentage and `int(max_features * n_features)` features are considered at each split.
  - If "auto", then `max_features=sqrt(n_features)`.
  - If "sqrt", then `max_features=sqrt(n_features)`.
  - If "log2", then `max_features=log2(n_features)`.
  - If None, then `max_features=n_features`.
  Note: the search for a split does not stop until at least one valid partition of the node samples is found, even if it requires effectively inspecting more than `max_features` features.
- max_leaf_nodes (int or None; default: null): Grow a tree with `max_leaf_nodes` in best-first fashion. Best nodes are defined by relative reduction in impurity. If None, the number of leaf nodes is unlimited.
- min_impurity_decrease (float; default: 0.0): A node will be split if this split induces a decrease of the impurity greater than or equal to this value. The weighted impurity decrease equation is the following:

      N_t / N * (impurity - N_t_R / N_t * right_impurity
                          - N_t_L / N_t * left_impurity)

  where `N` is the total number of samples, `N_t` is the number of samples at the current node, `N_t_L` is the number of samples in the left child, and `N_t_R` is the number of samples in the right child. `N`, `N_t`, `N_t_R` and `N_t_L` all refer to the weighted sum if `sample_weight` is passed. (Added in sklearn 0.19.)
- min_impurity_split (float; default: null): Threshold for early stopping in tree growth. A node will split if its impurity is above the threshold; otherwise it is a leaf. (Deprecated: `min_impurity_split` has been deprecated in favor of `min_impurity_decrease` in 0.19 and will be removed in 0.21. Use `min_impurity_decrease` instead.)
- min_samples_leaf (int; default: 1): The minimum number of samples required to be at a leaf node:
  - If int, then consider `min_samples_leaf` as the minimum number.
  - If float, then `min_samples_leaf` is a percentage and `ceil(min_samples_leaf * n_samples)` is the minimum number of samples for each node.
  (Changed in 0.18: added float values for percentages.)
- min_samples_split (int; default: 2): The minimum number of samples required to split an internal node:
  - If int, then consider `min_samples_split` as the minimum number.
  - If float, then `min_samples_split` is a percentage and `ceil(min_samples_split * n_samples)` is the minimum number of samples for each split.
  (Changed in 0.18: added float values for percentages.)
- min_weight_fraction_leaf (float; default: 0.0): The minimum weighted fraction of the sum total of weights (of all the input samples) required to be at a leaf node. Samples have equal weight when sample_weight is not provided.
- presort (bool; default: false): Whether to presort the data to speed up the finding of best splits in fitting. For the default settings of a decision tree on large datasets, setting this to true may slow down the training process. When using either a smaller dataset or a restricted depth, this may speed up the training.
- random_state (int; default: null): If int, random_state is the seed used by the random number generator; if a RandomState instance, random_state is the random number generator; if None, the random number generator is the RandomState instance used by `np.random`.
- splitter (string; default: "best"): The strategy used to choose the split at each node. Supported strategies are "best" to choose the best split and "random" to choose the best random split.
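The '__' naming convention from the Pipeline description is what makes these parameters tunable through the composite flow. A sketch of cross-validated model selection over both steps, reusing the `pipe` object from the first sketch (the grid values are illustrative, not part of the flow):

```python
from sklearn.model_selection import GridSearchCV

# Tune step parameters through the pipeline via <step_name>__<parameter>.
param_grid = {
    "imputer__strategy": ["mean", "median", "most_frequent"],
    "estimator__max_depth": [3, 5, None],
    "estimator__min_samples_leaf": [1, 5],
}
search = GridSearchCV(pipe, param_grid, cv=5)
# search.fit(X, y)  # X may contain NaNs; the imputer step handles them
```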