Flutter Plugin for MediaPipe Machine Learning Library
- Mentors
- Paul Ruiz
- Organization
- TensorFlow
- Technologies
- Swift, Kotlin, Flutter, Dart
- Topics
- machine learning, deep learning, cross-platform mobile
This project outlines the development of a Flutter plugin for MediaPipe, following a feature-first approach. The plan covers implementing the Text Task API, the Audio Task API, and the Vision Task API, the last of which includes Hand Landmark Detection, Gesture Recognition, and Image Classification and Embedding. Each feature will be tested and documented, and demo apps will be built to showcase it. The project is estimated at roughly 12 weeks, with each feature developed and tested in its own 1-3 week window. The final week is reserved for finalizing documentation, creating code tutorials, and preparing the plugin for publication. The goal is a high-quality, well-documented plugin that works across platforms and is easy for developers to use.
The main goals for the finished result of the project are:
- Multi-platform Flutter plugin for MediaPipe - The primary deliverable is a Flutter plugin compatible with both iOS and Android.
- Testing and documentation - Each feature must be thoroughly tested and documented to ensure it works as intended and that developers can adopt it easily.
- Demo apps - Creating demo apps to showcase each feature is an essential part of the project.
- Interactive Codelab tutorials - While not required, interactive Codelab tutorials would be a valuable addition, particularly during the documentation phase.