EvalAI is a platform for hosting and participating in AI challenges around the globe. For a challenge host, reproducibility of submission results and privacy of the test data are the main concerns. To address this, the idea is to allow participants to submit a Docker image containing their model and evaluate it on static, hidden test datasets.
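The evaluation step above can be sketched as building a locked-down `docker run` invocation: the hidden test set is mounted read-only and networking is disabled so the data cannot leave the host. This is an illustrative sketch, not EvalAI's actual worker configuration; the paths and flag choices are assumptions.

```python
import shlex


def build_eval_command(image, data_dir, results_dir):
    """Build a `docker run` invocation to evaluate a submitted image.

    Sketch under assumed conventions: the container reads the hidden test
    set from /data (mounted read-only) and writes predictions to /results.
    """
    return [
        "docker", "run", "--rm",
        "--network", "none",              # no network: test data cannot be exfiltrated
        "-v", f"{data_dir}:/data:ro",     # hidden test set, read-only
        "-v", f"{results_dir}:/results",  # container writes predictions here
        image,
    ]


# Hypothetical image name and host paths, for illustration only.
cmd = build_eval_command("user/model:v1", "/srv/testdata", "/srv/results")
print(shlex.join(cmd))
```

Running the command list via `subprocess.run(cmd)` on the evaluation host would then execute the submitted model without exposing the raw test data to the participant.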
Moreover, GitHub-based challenge creation allows challenge hosts to use a private GitHub repository to create a challenge and manage updates to it. The idea is to support bi-directional updates for challenges created via GitHub: this feature will allow hosts to sync changes made in the EvalAI UI back to the challenge's GitHub repository.
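One way to push a UI edit back to the repository is GitHub's REST Contents API (`PUT /repos/{owner}/{repo}/contents/{path}`), which takes the new file content base64-encoded plus the blob SHA of the version being replaced; the SHA acts as an optimistic lock, so a concurrent edit in the repo makes GitHub reject the update instead of silently overwriting it. A minimal sketch of building that request body, assuming the challenge config lives in a YAML file (the file content and SHA below are hypothetical):

```python
import base64


def build_update_payload(new_text, current_sha, message):
    """Build the JSON body for GitHub's update-file-contents endpoint.

    Field names (`message`, `content`, `sha`) come from the Contents API;
    the caller obtains `current_sha` from a prior GET on the same path.
    """
    return {
        "message": message,
        "content": base64.b64encode(new_text.encode("utf-8")).decode("ascii"),
        "sha": current_sha,
    }


payload = build_update_payload(
    "title: My Challenge\nstart_date: 2024-01-01\n",  # hypothetical config edited via the UI
    "a94a8fe5ccb19ba61c4c0873d391e987982fbbd3",       # hypothetical blob SHA from a prior GET
    "Sync challenge config edited via EvalAI UI",
)
```

The payload would then be sent with an authenticated `PUT` request; syncing in the other direction (repo to EvalAI) would use a webhook on push events.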