This project aims to bring post-training integer quantization to OpenCV’s DNN module and perform inference using 8-bit integer inputs and fixed-point arithmetic. A single quantize() call converts any Net object into an 8-bit quantized network, giving roughly 1.5x faster inference, a 4x reduction in memory consumption, and accuracy close to that of floating-point inference.
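
A minimal sketch of the intended workflow in Python, assuming OpenCV is built with the DNN module and Python bindings; the model path and the random calibration data below are placeholders, and the exact quantize() signature may differ between OpenCV releases:

    import cv2 as cv
    import numpy as np

    # Load any floating-point model supported by the DNN module
    # ("model.onnx" is a placeholder path).
    net = cv.dnn.readNet("model.onnx")

    # Representative inputs used to calibrate activation ranges
    # (random data here only as a stand-in).
    calib_data = [np.random.rand(1, 3, 224, 224).astype(np.float32)]

    # Produce an 8-bit signed integer version of the network.
    int8_net = net.quantize(calib_data, cv.CV_8S, cv.CV_8S)

    # The quantized network is used exactly like the original one.
    int8_net.setInput(calib_data[0])
    out = int8_net.forward()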

Organization

OpenCV

Student

Jebastin Nadar

Mentors

  • Vadim Pisarevsky

Year

2021