Contributor
Wenhe Li

Productionize XNNPACK Machine Learning Backend for Chrome OS


Mentors
Alan Green, Jim Pollock
Organization
Chromium

Chrome OS enables fast and relatively energy-efficient on-device machine learning by utilizing TensorFlow Lite and hardware ML accelerators. The hardware accelerators deliver good model-inference performance, but not every operator is implemented in hardware. In those cases we need to fall back to a generic CPU backend to guarantee the model still runs successfully.

The key is to implement a high-performance and stable CPU backend, and XNNPACK gives us the opportunity to do so: we can port its implementation to the Chrome OS codebase and align its API and internal calls with our NNAPI implementation.
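
As a rough illustration (not the project's actual Chrome OS integration, which goes through the NNAPI layer), the sketch below shows how TensorFlow Lite's public XNNPACK delegate API is typically used to route supported ops to XNNPACK's optimized CPU kernels, with anything unsupported falling back to the built-in reference kernels. The model file name and thread count are placeholders.

    #include <memory>

    #include "tensorflow/lite/delegates/xnnpack/xnnpack_delegate.h"
    #include "tensorflow/lite/interpreter.h"
    #include "tensorflow/lite/kernels/register.h"
    #include "tensorflow/lite/model.h"

    int main() {
      // Load a simple model such as MobileNet (placeholder path).
      auto model = tflite::FlatBufferModel::BuildFromFile("mobilenet_v2.tflite");

      tflite::ops::builtin::BuiltinOpResolver resolver;
      std::unique_ptr<tflite::Interpreter> interpreter;
      tflite::InterpreterBuilder(*model, resolver)(&interpreter);

      // Create the XNNPACK delegate: ops it supports run on XNNPACK's
      // optimized CPU kernels; anything else stays on the default
      // reference implementations.
      TfLiteXNNPackDelegateOptions options = TfLiteXNNPackDelegateOptionsDefault();
      options.num_threads = 4;  // placeholder thread count
      TfLiteDelegate* xnnpack_delegate = TfLiteXNNPackDelegateCreate(&options);

      interpreter->ModifyGraphWithDelegate(xnnpack_delegate);
      interpreter->AllocateTensors();

      // ... fill input tensors, then run inference ...
      interpreter->Invoke();

      // Destroy the interpreter before the delegate it uses.
      interpreter.reset();
      TfLiteXNNPackDelegateDelete(xnnpack_delegate);
      return 0;
    }

On Chrome OS the same idea applies one level lower: XNNPACK sits behind the NNAPI interface as the generic CPU backend that models fall back to when a hardware accelerator cannot handle an operator.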

By porting the most commonly used XNNPACK ops to Chrome OS, we can run simple models like MobileNet more efficiently than the existing reference operator implementations, with little effort. This would be a big plus for model inference across the Chrome OS ecosystem.