Multimodal sound monitoring: hydrophones, vessel tracking via AIS, radar, and camera systems
- Mentors
- Scott Veirs, Jesse, Praful Mathur, Paul Nguyen HD, Val Veirs
- Organization
- Orcasound
- Technologies
- Python, computer vision, TensorFlow/PyTorch/Keras/Ketos (deep learning)
- Topics
- Bioacoustic Source Separation, Vessel noise monitoring system
One goal of this project is to use deep learning models to better interpret underwater sounds. The first part involves training neural networks on streams of hydrophone audio to separate marine mammal vocalizations (orcas and humpback whales) from background noise such as that of ships and other vessels. We then train two separate classifiers: one for whale vocalizations and one for ship noise (a rough classification sketch appears below).

The second part of the project involves building an efficient vessel monitoring system. AIS data will be used to detect and track larger commercial vessels and some private yachts, while non-AIS boats can be tracked with radar and camera data through the Marine Monitoring system (see the AIS proximity sketch below). Once all the models are trained, the detectors can be deployed to the cloud and to a Jetson Nano.
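As a rough illustration of the classification stage, the sketch below converts a short hydrophone clip into a log-mel spectrogram with torchaudio and scores it with a small CNN. The architecture, the two-class setup (call vs. background), and the file name `clip.wav` are placeholder assumptions, not the project's actual detector; the same spectrogram front end could equally feed the separate ship-noise classifier.

```python
# Minimal sketch of a spectrogram-based call classifier (illustrative only;
# the real detector architecture and labels are decided during the project).
import torch
import torch.nn as nn
import torchaudio

class CallClassifier(nn.Module):
    """Tiny CNN over log-mel spectrograms: call vs. background (placeholder)."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

# Load a short clip (placeholder path) and compute a log-mel spectrogram.
waveform, sample_rate = torchaudio.load("clip.wav")
mel = torchaudio.transforms.MelSpectrogram(sample_rate=sample_rate, n_mels=64)(waveform)
log_mel = torchaudio.transforms.AmplitudeToDB()(mel).unsqueeze(0)  # (1, channels, mels, frames)

model = CallClassifier()
with torch.no_grad():
    probs = torch.softmax(model(log_mel[:, :1]), dim=-1)  # use the first channel only
print(probs)
```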
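For the vessel monitoring side, one simple building block is flagging AIS position reports that fall within some radius of a hydrophone. The field names, the example coordinates, and the 5 km radius below are illustrative assumptions; real reports would come from a decoded AIS feed rather than hand-made records.

```python
# Sketch: flag AIS position reports near a hydrophone location.
# Field names, coordinates, and the 5 km radius are illustrative assumptions.
from math import radians, sin, cos, asin, sqrt

HYDROPHONE = (47.735, -122.397)   # placeholder lat/lon for a hydrophone node
RADIUS_KM = 5.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def vessels_near_hydrophone(position_reports):
    """Yield reports whose position falls within RADIUS_KM of the hydrophone."""
    for report in position_reports:  # e.g. dicts decoded from AIS NMEA sentences
        dist = haversine_km(report["lat"], report["lon"], *HYDROPHONE)
        if dist <= RADIUS_KM:
            yield {**report, "distance_km": round(dist, 2)}

# Example usage with hand-made records (real data would come from an AIS receiver):
reports = [
    {"mmsi": 366123456, "lat": 47.74, "lon": -122.40, "sog": 12.3},
    {"mmsi": 367654321, "lat": 48.50, "lon": -123.10, "sog": 8.1},
]
print(list(vessels_near_hydrophone(reports)))
```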