17488
10963
sklearn.pipeline.Pipeline(imputer=sklearn.impute._base.SimpleImputer,estimator=xgboost.sklearn.XGBClassifier)
sklearn.Pipeline(SimpleImputer,XGBClassifier)
sklearn.pipeline.Pipeline
1
openml==0.10.2,sklearn==0.21.2,xgboost==0.90
Pipeline of transforms with a final estimator.
Sequentially apply a list of transforms and a final estimator.
Intermediate steps of the pipeline must be 'transforms', that is, they
must implement fit and transform methods.
The final estimator only needs to implement fit.
The transformers in the pipeline can be cached using ``memory`` argument.
The purpose of the pipeline is to assemble several steps that can be
cross-validated together while setting different parameters.
For this, it enables setting parameters of the various steps using their
names and the parameter name separated by a '__', as in the example below.
A step's estimator may be replaced entirely by setting the parameter
with its name to another estimator, or a transformer removed by setting
it to 'passthrough' or ``None``.
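The pipeline structure this flow records (an imputation step named "imputer" feeding a classifier named "estimator") can be sketched in plain scikit-learn. This is an illustrative reconstruction, not the serialized flow itself: `DecisionTreeClassifier` stands in for `XGBClassifier` so the example needs only scikit-learn, and the data is made up.

```python
# Sketch of this flow's structure: an imputer step chained to a classifier.
# DecisionTreeClassifier is a stand-in for xgboost.XGBClassifier here so the
# example only needs scikit-learn; the step names match the flow
# ("imputer", "estimator").
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.tree import DecisionTreeClassifier

pipe = Pipeline(steps=[
    ("imputer", SimpleImputer(strategy="constant", fill_value=-1)),
    ("estimator", DecisionTreeClassifier(random_state=42)),
])

# Parameters of a step are addressed as <step name>__<parameter name>,
# as described above.
pipe.set_params(imputer__strategy="mean")

X = np.array([[1.0, np.nan], [2.0, 3.0], [np.nan, 4.0], [5.0, 6.0]])
y = np.array([0, 1, 0, 1])
pipe.fit(X, y)
print(pipe.predict(X).shape)  # (4,)
```

The `<step>__<param>` addressing is what allows the whole pipeline, including imputer settings, to be tuned inside a single cross-validated search.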
2020-01-10T21:36:36
English
sklearn==0.21.2
numpy>=1.6.1
scipy>=0.9
memory
None
null
Used to cache the fitted transformers of the pipeline. By default,
no caching is performed. If a string is given, it is the path to
the caching directory. Enabling caching triggers a clone of
the transformers before fitting. Therefore, the transformer
instance given to the pipeline cannot be inspected
directly. Use the attribute ``named_steps`` or ``steps`` to
inspect estimators within the pipeline. Caching the
transformers is advantageous when fitting is time consuming
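The caching behaviour described above can be sketched as follows; the cache directory, data, and final estimator are illustrative choices, not part of this flow.

```python
# Sketch of the ``memory`` argument: with a cache directory set, the
# pipeline clones its transformers before fitting and caches the fitted
# clones, so the instance originally passed in is NOT the fitted one.
# Inspect fitted steps through ``named_steps`` instead.
import tempfile
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression

cache_dir = tempfile.mkdtemp()  # illustrative caching directory
pipe = Pipeline(
    steps=[("imputer", SimpleImputer()), ("estimator", LogisticRegression())],
    memory=cache_dir,
)

X = np.array([[1.0, np.nan], [2.0, 3.0], [3.0, 4.0], [4.0, 5.0]])
y = np.array([0, 0, 1, 1])
pipe.fit(X, y)

# The fitted (cloned) imputer lives in named_steps and carries the
# learned per-column statistics.
print(hasattr(pipe.named_steps["imputer"], "statistics_"))  # True
```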
steps
list
[{"oml-python:serialized_object": "component_reference", "value": {"key": "imputer", "step_name": "imputer"}}, {"oml-python:serialized_object": "component_reference", "value": {"key": "estimator", "step_name": "estimator"}}]
List of (name, transform) tuples (implementing fit/transform) that are
chained, in the order in which they are chained, with the last object
an estimator
verbose
boolean
false
If True, the time elapsed while fitting each step will be printed as it
is completed.
imputer
17407
1
sklearn.impute._base.SimpleImputer
sklearn.SimpleImputer
sklearn.impute._base.SimpleImputer
11
openml==0.10.2,sklearn==0.21.2
Imputation transformer for completing missing values.
2019-11-22T01:19:36
English
sklearn==0.21.2
numpy>=1.6.1
scipy>=0.9
add_indicator
boolean
false
If True, a `MissingIndicator` transform will stack onto output
of the imputer's transform. This allows a predictive estimator
to account for missingness despite imputation. If a feature has no
missing values at fit/train time, the feature won't appear on
the missing indicator even if there are missing values at
transform/test time.
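The effect of `add_indicator` described above can be shown on a small array (the data is illustrative; this flow itself leaves `add_indicator` at its default of false):

```python
# Sketch of add_indicator=True: the imputer's output gains one binary
# column per feature that had missing values at fit time, marking where
# values were imputed.
import numpy as np
from sklearn.impute import SimpleImputer

X = np.array([[1.0, np.nan], [2.0, 3.0], [np.nan, 4.0]])

imp = SimpleImputer(strategy="mean", add_indicator=True)
Xt = imp.fit_transform(X)
# Both features contain NaN at fit time, so two indicator columns
# are appended: 2 imputed features + 2 indicators = 4 columns.
print(Xt.shape)  # (3, 4)
```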
copy
boolean
true
If True, a copy of X will be created. If False, imputation will
be done in-place whenever possible. Note that, in the following cases,
a new copy will always be made, even if `copy=False`:
- If X is not an array of floating values;
- If X is encoded as a CSR matrix;
- If add_indicator=True
fill_value
string or numerical value
-1
When strategy == "constant", fill_value is used to replace all
occurrences of missing_values
If left to the default, fill_value will be 0 when imputing numerical
data and "missing_value" for strings or object data types
missing_values
number
NaN
The placeholder for the missing values. All occurrences of
`missing_values` will be imputed
strategy
string
"constant"
The imputation strategy
- If "mean", then replace missing values using the mean along
each column. Can only be used with numeric data
- If "median", then replace missing values using the median along
each column. Can only be used with numeric data
- If "most_frequent", then replace missing using the most frequent
value along each column. Can be used with strings or numeric data
- If "constant", then replace missing values with fill_value. Can be
used with strings or numeric data
.. versionadded:: 0.20
strategy="constant" for fixed value imputation
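The strategies listed above can be compared on a small array, using this flow's recorded settings (`strategy="constant"`, `fill_value=-1`) alongside the `"mean"` strategy; the data is illustrative.

```python
# Sketch of two imputation strategies on the same data.
import numpy as np
from sklearn.impute import SimpleImputer

X = np.array([[1.0, np.nan], [3.0, 4.0], [np.nan, 8.0]])

# "constant" with fill_value=-1, as configured in this flow:
const = SimpleImputer(strategy="constant", fill_value=-1).fit_transform(X)
print(const[0, 1], const[2, 0])  # -1.0 -1.0

# "mean" replaces NaN with the per-column mean:
mean = SimpleImputer(strategy="mean").fit_transform(X)
print(mean[0, 1])  # mean of [4.0, 8.0] -> 6.0
```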
verbose
integer
0
Controls the verbosity of the imputer
openml-python
python
scikit-learn
sklearn
sklearn_0.21.2
estimator
17489
10963
xgboost.sklearn.XGBClassifier
xgboost.XGBClassifier
xgboost.sklearn.XGBClassifier
8
openml==0.10.2,sklearn==0.21.2,xgboost==0.90
Implementation of the scikit-learn API for XGBoost classification.
2020-01-10T21:36:36
English
sklearn==0.21.2
numpy>=1.6.1
scipy>=0.9
base_score
float
0.5
The initial prediction score of all instances, global bias
booster
string
"gbtree"
Specify which booster to use: gbtree, gblinear or dart
colsample_bylevel
float
1
Subsample ratio of columns for each level
colsample_bynode
float
1
Subsample ratio of columns for each split
colsample_bytree
float
1
Subsample ratio of columns when constructing each tree
gamma
float
0
Minimum loss reduction required to make a further partition on a leaf node of the tree
importance_type
string
"gain"
The feature importance type for the feature_importances_ property: either "gain",
"weight", "cover", "total_gain" or "total_cover"
learning_rate
float
0.1
Boosting learning rate (xgb's "eta")
max_delta_step
int
0
Maximum delta step we allow each tree's weight estimation to be
max_depth
int
3
Maximum tree depth for base learners
min_child_weight
int
1
Minimum sum of instance weight (hessian) needed in a child
missing
float
null
Value in the data which needs to be present as a missing value. If
None, defaults to np.nan
n_estimators
int
100
Number of trees to fit
n_jobs
int
1
Number of parallel threads used to run xgboost. (replaces ``nthread``)
nthread
int
null
Number of parallel threads used to run xgboost. (Deprecated, please use ``n_jobs``)
objective
string or callable
"multi:softprob"
Specify the learning task and the corresponding learning objective or
a custom objective function to be used (see note below)
random_state
int
42
Random number seed. (replaces seed)
reg_alpha
float
0
L1 regularization term on weights
reg_lambda
float
1
L2 regularization term on weights
scale_pos_weight
float
1
Balancing of positive and negative weights
seed
int
null
Random number seed. (Deprecated, please use random_state)
silent
boolean
null
Whether to print messages while running boosting. Deprecated. Use verbosity instead
subsample
float
1
Subsample ratio of the training instance
verbosity
int
1
The degree of verbosity. Valid values are 0 (silent) - 3 (debug)
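For reference, the hyperparameter settings recorded for this flow's estimator can be collected into a plain dict; only `random_state` (42) and `objective` ("multi:softprob") differ from xgboost 0.90 defaults. Passing this dict as `xgboost.XGBClassifier(**params)` should reproduce the configured estimator (xgboost itself is not imported here, so that call is an assumption, not executed).

```python
# Non-deprecated hyperparameters recorded in this flow, as a plain dict.
# Deprecated aliases (nthread, seed, silent) are omitted in favour of
# n_jobs, random_state, and verbosity.
params = {
    "base_score": 0.5,
    "booster": "gbtree",
    "colsample_bylevel": 1,
    "colsample_bynode": 1,
    "colsample_bytree": 1,
    "gamma": 0,
    "learning_rate": 0.1,
    "max_delta_step": 0,
    "max_depth": 3,
    "min_child_weight": 1,
    "n_estimators": 100,
    "n_jobs": 1,
    "objective": "multi:softprob",
    "random_state": 42,
    "reg_alpha": 0,
    "reg_lambda": 1,
    "scale_pos_weight": 1,
    "subsample": 1,
    "verbosity": 1,
}
print(len(params))  # 19
```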
openml-python
python
scikit-learn
sklearn
sklearn_0.21.2
openml-python
python
scikit-learn
sklearn
sklearn_0.21.2