18916
6691
sklearn.pipeline.Pipeline(imputation=sklearn.preprocessing.imputation.Imputer,classifier=sklearn.tree.tree.DecisionTreeClassifier)
sklearn.Pipeline(Imputer,DecisionTreeClassifier)
sklearn.pipeline.Pipeline
3
openml==0.12.2,sklearn==0.18.1
Pipeline of transforms with a final estimator.
Sequentially apply a list of transforms and a final estimator.
Intermediate steps of the pipeline must be 'transforms', that is, they
must implement fit and transform methods.
The final estimator only needs to implement fit.
The purpose of the pipeline is to assemble several steps that can be
cross-validated together while setting different parameters.
For this, it enables setting parameters of the various steps using their
names and the parameter name separated by a '__', as in the example below.
A step's estimator may be replaced entirely by setting the parameter
with its name to another estimator, or a transformer removed by setting
to None.
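The step-naming and `__` parameter convention described above can be sketched with current scikit-learn. Note this flow was built against sklearn 0.18.1, whose `Imputer` class was later replaced by `SimpleImputer` (removed in 0.22); the sketch below uses the modern name but keeps this flow's step names.

```python
# Sketch of this flow's structure on current scikit-learn (>=0.22);
# SimpleImputer stands in for the 0.18-era Imputer used by the flow.
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.tree import DecisionTreeClassifier

pipe = Pipeline(steps=[
    ("imputation", SimpleImputer(strategy="median")),
    ("classifier", DecisionTreeClassifier()),
])

# A step's parameter is addressed as <step_name>__<param_name>,
# which is what lets the whole pipeline be cross-validated at once.
pipe.set_params(classifier__max_depth=3)
print(pipe.get_params()["classifier__max_depth"])  # 3
```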
2021-08-13T19:35:12
English
sklearn==0.18.1
numpy>=1.6.1
scipy>=0.9
steps
list
[{"oml-python:serialized_object": "component_reference", "value": {"key": "imputation", "step_name": "imputation"}}, {"oml-python:serialized_object": "component_reference", "value": {"key": "classifier", "step_name": "classifier"}}]
List of (name, transform) tuples (implementing fit/transform) that are
chained, in the order in which they are chained, with the last object
an estimator.
imputation
18879
6691
sklearn.preprocessing.imputation.Imputer
sklearn.Imputer
sklearn.preprocessing.imputation.Imputer
52
openml==0.12.2,sklearn==0.18.1
Imputation transformer for completing missing values.
2021-08-13T19:19:33
English
sklearn==0.18.1
numpy>=1.6.1
scipy>=0.9
axis
integer
0
The axis along which to impute
- If `axis=0`, then impute along columns
- If `axis=1`, then impute along rows
copy
boolean
true
If True, a copy of X will be created. If False, imputation will
be done in-place whenever possible. Note that, in the following cases,
a new copy will always be made, even if `copy=False`:
- If X is not an array of floating values;
- If X is sparse and `missing_values=0`;
- If `axis=0` and X is encoded as a CSR matrix;
- If `axis=1` and X is encoded as a CSC matrix.
missing_values
integer or "NaN"
"NaN"
The placeholder for the missing values. All occurrences of
`missing_values` will be imputed. For missing values encoded as np.nan,
use the string value "NaN"
strategy
string
"median"
The imputation strategy
- If "mean", then replace missing values using the mean along
the axis
- If "median", then replace missing values using the median along
the axis
- If "most_frequent", then replace missing using the most frequent
value along the axis
verbose
integer
0
Controls the verbosity of the imputer
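A minimal sketch of the imputation behaviour recorded above, using `SimpleImputer` (the successor to the 0.18 `Imputer`; the `axis` parameter was dropped with it, and imputation is per column, the old `axis=0` behaviour). `strategy="median"` matches this flow's default.

```python
import numpy as np
from sklearn.impute import SimpleImputer  # successor to the 0.18 Imputer

X = np.array([[1.0, 2.0],
              [np.nan, 3.0],
              [7.0, np.nan]])

# strategy="median" is this flow's default; each column's missing
# entries are replaced by that column's median.
imp = SimpleImputer(missing_values=np.nan, strategy="median")
print(imp.fit_transform(X))
```

Column 0 has median 4.0 and column 1 has median 2.5, so the two NaNs are filled with those values respectively.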
openml-python
python
scikit-learn
sklearn
sklearn_0.18.1
classifier
18895
6691
sklearn.tree.tree.DecisionTreeClassifier
sklearn.DecisionTreeClassifier
sklearn.tree.tree.DecisionTreeClassifier
66
openml==0.12.2,sklearn==0.18.1
A decision tree classifier.
2021-08-13T19:23:05
English
sklearn==0.18.1
numpy>=1.6.1
scipy>=0.9
class_weight
dict, list of dicts, "balanced" or None
null
Weights associated with classes in the form ``{class_label: weight}``
If not given, all classes are supposed to have weight one. For
multi-output problems, a list of dicts can be provided in the same
order as the columns of y
The "balanced" mode uses the values of y to automatically adjust
weights inversely proportional to class frequencies in the input data
as ``n_samples / (n_classes * np.bincount(y))``
For multi-output, the weights of each column of y will be multiplied
Note that these weights will be multiplied with sample_weight (passed
through the fit method) if sample_weight is specified
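The "balanced" weighting formula quoted above can be checked directly with NumPy; the toy labels below are an illustration, not part of the flow.

```python
import numpy as np

y = np.array([0, 0, 0, 1])          # 3 samples of class 0, 1 of class 1
n_samples, n_classes = len(y), 2

# "balanced" mode: n_samples / (n_classes * np.bincount(y))
weights = np.asarray(n_samples / (n_classes * np.bincount(y)))
print(weights)  # class 0 -> 4/6 ~ 0.667, class 1 -> 4/2 = 2.0
```

The minority class receives the larger weight, inversely proportional to its frequency.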
criterion
string
"gini"
The function to measure the quality of a split. Supported criteria are
"gini" for the Gini impurity and "entropy" for the information gain
max_depth
int or None
null
The maximum depth of the tree. If None, then nodes are expanded until
all leaves are pure or until all leaves contain less than
min_samples_split samples
max_features
int, float, string or None
null
The number of features to consider when looking for the best split:
- If int, then consider `max_features` features at each split
- If float, then `max_features` is a percentage and
`int(max_features * n_features)` features are considered at each
split
- If "auto", then `max_features=sqrt(n_features)`
- If "sqrt", then `max_features=sqrt(n_features)`
- If "log2", then `max_features=log2(n_features)`
- If None, then `max_features=n_features`
Note: the search for a split does not stop until at least one
valid partition of the node samples is found, even if it requires to
effectively inspect more than ``max_features`` features
max_leaf_nodes
int or None
null
Grow a tree with ``max_leaf_nodes`` in best-first fashion
Best nodes are defined as relative reduction in impurity
If None then unlimited number of leaf nodes
min_impurity_split
float
1e-07
Threshold for early stopping in tree growth. A node will split
if its impurity is above the threshold, otherwise it is a leaf
.. versionadded:: 0.18
min_samples_leaf
int or float
1
The minimum number of samples required to be at a leaf node:
- If int, then consider `min_samples_leaf` as the minimum number
- If float, then `min_samples_leaf` is a percentage and
`ceil(min_samples_leaf * n_samples)` are the minimum
number of samples for each node
.. versionchanged:: 0.18
Added float values for percentages
min_samples_split
int or float
2
The minimum number of samples required to split an internal node:
- If int, then consider `min_samples_split` as the minimum number
- If float, then `min_samples_split` is a percentage and
`ceil(min_samples_split * n_samples)` are the minimum
number of samples for each split
.. versionchanged:: 0.18
Added float values for percentages
min_weight_fraction_leaf
float
0.0
The minimum weighted fraction of the sum total of weights (of all
the input samples) required to be at a leaf node. Samples have
equal weight when sample_weight is not provided
presort
bool
false
Whether to presort the data to speed up the finding of best splits in
fitting. For the default settings of a decision tree on large
datasets, setting this to true may slow down the training process
When using either a smaller dataset or a restricted depth, this may
speed up the training.
random_state
int, RandomState instance or None
null
If int, random_state is the seed used by the random number generator;
If RandomState instance, random_state is the random number generator;
If None, the random number generator is the RandomState instance used
by `np.random`
splitter
string
"best"
The strategy used to choose the split at each node. Supported
strategies are "best" to choose the best split and "random" to choose
the best random split
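A minimal fit/predict sketch of the classifier with the defaults recorded in this flow (`criterion="gini"`, `splitter="best"`); the toy data is an assumption for illustration only.

```python
from sklearn.tree import DecisionTreeClassifier

# Toy data: one feature, labels flip at x >= 2, perfectly separable.
X = [[0.0], [1.0], [2.0], [3.0]]
y = [0, 0, 1, 1]

clf = DecisionTreeClassifier(criterion="gini", splitter="best",
                             random_state=0)
clf.fit(X, y)
print(clf.predict([[0.5], [2.5]]))  # [0 1]
```

With separable data the tree learns a single split (here between 1.0 and 2.0), so the two probes fall cleanly on either side.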
openml-python
python
scikit-learn
sklearn
sklearn_0.18.1
openml-python
python
scikit-learn
sklearn
sklearn_0.18.1