Camera motion estimation plays an important role in the visual effects (VFX) industry, enabling interaction between virtual elements and real footage. Currently, Blender only supports solving camera motion from a single view, which sometimes fails to produce satisfactory solutions. In this project, I will implement a general-purpose multi-view tracking system that incorporates witness cameras to strengthen the estimation result. The project consists of a front-end UI integration, mainly for users to specify matched markers across cameras, and a back-end optimization engine that operates on that input. This multi-view reconstruction system will improve the stability of the camera tracking solver and help artists create high-quality visual work.
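To illustrate why witness cameras help, here is a minimal sketch (not Blender's actual solver code) of the core geometric idea: once the same marker is matched across several cameras with known projection matrices, its 3D position can be recovered by linear (DLT) triangulation, and every extra view adds constraints that stabilize the estimate. All names and the example camera setup below are hypothetical.

```python
import numpy as np

def triangulate_point(proj_mats, points_2d):
    """Triangulate one 3D point from its observations in several cameras
    using the linear DLT method. proj_mats: list of 3x4 projection
    matrices; points_2d: list of (x, y) image observations."""
    rows = []
    for P, (x, y) in zip(proj_mats, points_2d):
        # Each observation contributes two linear constraints on the
        # homogeneous 3D point X: x * (P[2] @ X) = P[0] @ X, etc.
        rows.append(x * P[2] - P[0])
        rows.append(y * P[2] - P[1])
    A = np.asarray(rows)
    # The solution is the right null vector of A (last row of V^T).
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # dehomogenize

def project(P, X):
    """Project a 3D point into an image with projection matrix P."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Hypothetical main and witness cameras observing the same marker.
P_main = np.hstack([np.eye(3), np.zeros((3, 1))])              # at origin
P_witness = np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])  # shifted
X_true = np.array([1.0, 2.0, 5.0])

obs = [project(P_main, X_true), project(P_witness, X_true)]
X_est = triangulate_point([P_main, P_witness], obs)
print(np.allclose(X_est, X_true))  # True for noise-free observations
```

In a real solver, triangulated points like this would seed a bundle adjustment that refines camera poses and 3D marker positions jointly across all views.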