Building platforms for reproducible AI research
CloudCV is an open-source cloud platform led by graduate students and faculty at the Machine Learning and Perception Lab at Georgia Tech, with the aim of building tools for reproducible and accessible AI research and development. At CloudCV, we are building tools that enable researchers to build, compare, and share state-of-the-art algorithms. We believe that one shouldn't have to be an AI expert to have access to cutting-edge vision algorithms. Likewise, researchers shouldn't have to worry about building a service around their deep learning models to showcase and share them with others.
We are building a platform called EvalAI as a scalable solution for the research community to fulfill the critical need for evaluating machine learning models. This will help researchers, students, and data scientists to create, collaborate, and participate in AI challenges organized around the globe. By simplifying and standardizing the process of benchmarking these models, we seek to lower the barrier to entry for participating in the global scientific effort to push the frontiers of machine learning and artificial intelligence, thereby increasing the rate of measurable progress in this domain.
The platform is used by more than 25 organizations from industry and academia, including Facebook, eBay, IBM, Stanford, MIT, and Georgia Tech. It has hosted 90+ AI challenges with 9500+ users who have created 100k+ submissions. It has 120+ open-source contributors and 1.3M+ page views since its launch in 2017. Several research organizations, such as Mapillary Research and IBM Research, are using forked versions of the platform to host their internal challenges instead of reinventing the wheel.
CloudCV 2021 Projects
Improvements in EvalAI frontend
This project involves fixing the last remaining kinks in the EvalAI UI. The goal of this project is to improve the new UI as we replace the...
Monitoring setup for EvalAI
As the number of challenges on EvalAI increases, we want to focus on improving the performance of our services. As a first step, we will focus...
Static code upload challenge evaluation and enhancements in GitHub-based challenge creation
EvalAI is a platform to host and participate in AI challenges around the globe. For a challenge host, reproducibility of submission results and...