In supervised classification, you are given an input dataset in which instances are labeled with a certain class. The goal is to build a model that predicts the class for future unlabeled instances. The model is evaluated using a train-test procedure, e.g. cross-validation.
To make results from different users comparable, you are given the exact train-test folds to use, and you must return at least the predictions your model generates for each test instance. OpenML uses these predictions to compute a range of evaluation measures on the server.
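The fold-wise prediction procedure can be sketched as follows. This is a minimal illustration, not the official client: the dataset and the `KFold` splitter stand in for the exact splits a real task would provide, which specify the repetition, fold, and test-instance assignments.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import KFold
from sklearn.tree import DecisionTreeClassifier

# Hypothetical stand-in for the task's predefined splits: a real task
# ships the exact fold/test-instance assignments to use.
X, y = make_classification(n_samples=100, n_features=5, random_state=0)
folds = KFold(n_splits=10, shuffle=True, random_state=0)

# Collect one prediction per test instance across all folds.
predictions = []  # rows of (fold, row_id, predicted_class, true_class)
for fold_idx, (train_idx, test_idx) in enumerate(folds.split(X)):
    clf = DecisionTreeClassifier(random_state=0).fit(X[train_idx], y[train_idx])
    for row_id, pred in zip(test_idx, clf.predict(X[test_idx])):
        predictions.append((fold_idx, int(row_id), int(pred), int(y[row_id])))

# Every instance appears exactly once as a test instance.
assert len(predictions) == len(X)
```

The collected rows are what a submission boils down to: one predicted label (and, for most measures, class probabilities) per test instance per fold, so the server can recompute any evaluation measure.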
You can also upload your own evaluation measures, provided the code for computing them is available in the implementation you used. For extremely large datasets, uploading all predictions may be infeasible; in those cases, compute the evaluations yourself and include them with your run.
Optionally, you can upload the model trained on all the input data. There is no restriction on the file format, but please use a well-known format such as PMML.
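As an illustration of one well-known format, the sketch below serializes a trained scikit-learn model with Python's built-in pickle; this is an assumption about your toolchain, not a requirement, and tools such as sklearn2pmml exist if you prefer PMML.

```python
import pickle
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

# Hypothetical example: train on the full dataset, then serialize the
# model to bytes that could be written to a file for upload.
X, y = make_classification(n_samples=100, n_features=5, random_state=0)
model = DecisionTreeClassifier(random_state=0).fit(X, y)

blob = pickle.dumps(model)     # bytes ready to write to a file
restored = pickle.loads(blob)  # round-trips to an equivalent predictor
assert (restored.predict(X) == model.predict(X)).all()
```

Pickle is convenient within Python but not portable across languages, which is why a cross-platform format like PMML can be the better choice for sharing.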
How to submit runs

- Using your favorite machine learning environment: download this task directly in your environment and automatically upload your results.
- From your own software: use one of our APIs to download data from OpenML and upload your results.
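For the API route, a run can be submitted in a few lines with the openml-python package. This is a sketch, assuming the package is installed (`pip install openml`) and that you have an API key from your OpenML account; the task id 31 is only an example. It is not executed here because it requires network access and valid credentials.

```python
import openml
from sklearn.tree import DecisionTreeClassifier

openml.config.apikey = "YOUR_API_KEY"  # found on your OpenML account page

# Downloads the dataset together with the exact train-test folds.
task = openml.tasks.get_task(31)       # example task id

# Trains and predicts on the task's predefined folds, then uploads
# the resulting predictions to the server.
clf = DecisionTreeClassifier()
run = openml.runs.run_model_on_task(clf, task)
run.publish()
```

After publishing, the server computes the evaluation measures from the uploaded predictions, so results stay comparable across users.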