{"flow":{"id":"17750","uploader":"12269","name":"sklearn.pipeline.Pipeline(step_0=sklearn.decomposition._factor_analysis.FactorAnalysis,step_1=sklearn.ensemble._weight_boosting.AdaBoostClassifier)","custom_name":"sklearn.Pipeline(FactorAnalysis,AdaBoostClassifier)","class_name":"sklearn.pipeline.Pipeline","version":"1","external_version":"openml==0.10.2,sklearn==0.22.1","description":"Pipeline of transforms with a final estimator.\n\nSequentially apply a list of transforms and a final estimator.\nIntermediate steps of the pipeline must be 'transforms', that is, they\nmust implement fit and transform methods.\nThe final estimator only needs to implement fit.\nThe transformers in the pipeline can be cached using ``memory`` argument.\n\nThe purpose of the pipeline is to assemble several steps that can be\ncross-validated together while setting different parameters.\nFor this, it enables setting parameters of the various steps using their\nnames and the parameter name separated by a '__', as in the example below.\nA step's estimator may be replaced entirely by setting the parameter\nwith its name to another estimator, or a transformer removed by setting\nit to 'passthrough' or ``None``.","upload_date":"2020-05-18T23:50:24","language":"English","dependencies":"sklearn==0.22.1\nnumpy>=1.6.1\nscipy>=0.9","parameter":[{"name":"memory","data_type":"None","default_value":"null","description":"Used to cache the fitted transformers of the pipeline. By default,\n no caching is performed. If a string is given, it is the path to\n the caching directory. Enabling caching triggers a clone of\n the transformers before fitting. Therefore, the transformer\n instance given to the pipeline cannot be inspected\n directly. Use the attribute ``named_steps`` or ``steps`` to\n inspect estimators within the pipeline. 
Caching the\n transformers is advantageous when fitting is time consuming."},{"name":"steps","data_type":"list","default_value":"[{\"oml-python:serialized_object\": \"component_reference\", \"value\": {\"key\": \"step_0\", \"step_name\": \"step_0\"}}, {\"oml-python:serialized_object\": \"component_reference\", \"value\": {\"key\": \"step_1\", \"step_name\": \"step_1\"}}]","description":"List of (name, transform) tuples (implementing fit\/transform) that are\n chained, in the order in which they are chained, with the last object\n an estimator."},{"name":"verbose","data_type":"bool","default_value":"false","description":"If True, the time elapsed while fitting each step will be printed as it\n is completed."}],"component":[{"identifier":"step_1","flow":{"id":"17696","uploader":"12269","name":"sklearn.ensemble._weight_boosting.AdaBoostClassifier","custom_name":"sklearn.AdaBoostClassifier","class_name":"sklearn.ensemble._weight_boosting.AdaBoostClassifier","version":"2","external_version":"openml==0.10.2,sklearn==0.22.1","description":"An AdaBoost classifier.\n\nAn AdaBoost [1] classifier is a meta-estimator that begins by fitting a\nclassifier on the original dataset and then fits additional copies of the\nclassifier on the same dataset but where the weights of incorrectly\nclassified instances are adjusted such that subsequent classifiers focus\nmore on difficult cases.\n\nThis class implements the algorithm known as AdaBoost-SAMME [2].","upload_date":"2020-05-18T19:37:50","language":"English","dependencies":"sklearn==0.22.1\nnumpy>=1.6.1\nscipy>=0.9","parameter":[{"name":"algorithm","data_type":"str","default_value":"\"SAMME\"","description":"{'SAMME', 'SAMME.R'}, optional (default='SAMME.R')\n If 'SAMME.R' then use the SAMME.R real boosting algorithm;\n ``base_estimator`` must support calculation of class probabilities.\n If 'SAMME' then use the SAMME discrete boosting algorithm.\n The SAMME.R algorithm typically converges faster than SAMME,\n achieving a lower test error with fewer boosting iterations."},{"name":"base_estimator","data_type":"object","default_value":"null","description":"The base estimator from which the boosted ensemble is built.\n Support for sample weighting is required, as well as proper\n ``classes_`` and ``n_classes_`` attributes. 
If ``None``, then\n the base estimator is ``DecisionTreeClassifier(max_depth=1)``."},{"name":"learning_rate","data_type":"float","default_value":"1.5568079200489067e-05","description":"Learning rate shrinks the contribution of each classifier by\n ``learning_rate``. There is a trade-off between ``learning_rate`` and\n ``n_estimators``."},{"name":"n_estimators","data_type":"int","default_value":"532","description":"The maximum number of estimators at which boosting is terminated.\n In case of perfect fit, the learning procedure is stopped early."},{"name":"random_state","data_type":"int","default_value":"42","description":"If int, random_state is the seed used by the random number generator;\n If RandomState instance, random_state is the random number generator;\n If None, the random number generator is the RandomState instance used\n by `np.random`."}],"tag":["openml-python","python","scikit-learn","sklearn","sklearn_0.22.1"]}},{"identifier":"step_0","flow":{"id":"17734","uploader":"12269","name":"sklearn.decomposition._factor_analysis.FactorAnalysis","custom_name":"sklearn.FactorAnalysis","class_name":"sklearn.decomposition._factor_analysis.FactorAnalysis","version":"1","external_version":"openml==0.10.2,sklearn==0.22.1","description":"Factor Analysis (FA)\n\nA simple linear generative model with Gaussian latent variables.\n\nThe observations are assumed to be caused by a linear transformation of\nlower-dimensional latent factors and added Gaussian noise.\nWithout loss of generality the factors are distributed according to a\nGaussian with zero mean and unit covariance. 
The noise is also zero mean\nand has an arbitrary diagonal covariance matrix.\n\nIf we were to restrict the model further, by assuming that the Gaussian\nnoise is even isotropic (all diagonal entries are the same), we would obtain\n:class:`PPCA`.\n\nFactorAnalysis performs a maximum likelihood estimate of the so-called\n`loading` matrix, the transformation of the latent variables to the\nobserved ones, using an SVD-based approach.","upload_date":"2020-05-18T23:44:12","language":"English","dependencies":"sklearn==0.22.1\nnumpy>=1.6.1\nscipy>=0.9","parameter":[{"name":"copy","data_type":"bool","default_value":"false","description":"Whether to make a copy of X. If ``False``, the input X gets overwritten\n during fitting."},{"name":"iterated_power","data_type":"int","default_value":"3","description":"Number of iterations for the power method. 3 by default. Only used\n if ``svd_method`` equals 'randomized'."},{"name":"max_iter","data_type":"int","default_value":"7723","description":"Maximum number of iterations."},{"name":"n_components","data_type":"int","default_value":"3","description":"Dimensionality of latent space, the number of components\n of ``X`` that are obtained after ``transform``.\n If None, n_components is set to the number of features."},{"name":"noise_variance_init","data_type":"None","default_value":"null","description":"The initial guess of the noise variance for each feature.\n If None, it defaults to np.ones(n_features)."},{"name":"random_state","data_type":"int","default_value":"42","description":"If int, random_state is the seed used by the random number generator;\n If RandomState instance, random_state is the random number generator;\n If None, the random number generator is the RandomState instance used\n by `np.random`. Only used when ``svd_method`` equals 'randomized'."},
{"name":"svd_method","data_type":"str","default_value":"\"lapack\"","description":"{'lapack', 'randomized'}\n Which SVD method to use. If 'lapack', use standard SVD from\n scipy.linalg; if 'randomized', use the fast ``randomized_svd`` function.\n Defaults to 'randomized'. For most applications 'randomized' will\n be sufficiently precise while providing significant speed gains.\n Accuracy can also be improved by setting higher values for\n `iterated_power`. If this is not sufficient, for maximum precision\n you should choose 'lapack'."},{"name":"tol","data_type":"float","default_value":"1.841563435236402","description":"Stopping tolerance for log-likelihood increase."}],"tag":["openml-python","python","scikit-learn","sklearn","sklearn_0.22.1"]}}],"tag":["openml-python","python","scikit-learn","sklearn","sklearn_0.22.1"]}}
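
The flow above can be reconstructed as a plain scikit-learn pipeline. The following is a minimal sketch, not part of the serialized flow itself: hyperparameter values are copied from the flow's "parameter" entries, and the flow pins sklearn==0.22.1, though these constructor arguments are also accepted by most later releases (note that `algorithm` was deprecated on `AdaBoostClassifier` in recent versions).

```python
from sklearn.pipeline import Pipeline
from sklearn.decomposition import FactorAnalysis
from sklearn.ensemble import AdaBoostClassifier

# Reconstruction of flow 17750: FactorAnalysis -> AdaBoostClassifier,
# with hyperparameters taken from the flow's parameter settings.
pipe = Pipeline(
    steps=[
        # step_0 (flow 17734): project to a 3-component latent space
        ("step_0", FactorAnalysis(
            n_components=3,
            tol=1.841563435236402,
            copy=False,
            max_iter=7723,
            noise_variance_init=None,
            svd_method="lapack",
            iterated_power=3,
            random_state=42,
        )),
        # step_1 (flow 17696): boosted ensemble; with base_estimator=None the
        # default base learner is DecisionTreeClassifier(max_depth=1)
        ("step_1", AdaBoostClassifier(
            algorithm="SAMME",
            learning_rate=1.5568079200489067e-05,
            n_estimators=532,
            random_state=42,
        )),
    ],
    verbose=False,
)

# Nested parameters are addressed with the "<step>__<param>" syntax
# described in the Pipeline docstring:
print(pipe.get_params()["step_1__n_estimators"])  # 532
```

Calling `pipe.fit(X, y)` then fits FactorAnalysis, transforms `X`, and fits the boosted ensemble on the transformed features, exactly as OpenML replays this flow on a task.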