Contributor
Advik Basani

FaustNet: Enabling ML in Faust


Mentors
sletz, Thomas Rushton
Organization
GRAME
Technologies
Faust, Automatic Differentiation
Topics
machine learning
This proposal bridges the gap between audio processing and machine learning by introducing an automatic differentiation (AD) library for the Faust programming language, enabling audio engineers to apply machine learning without leaving their familiar Faust environment.

The core of the library is a set of pre-defined building blocks: Faust primitives with built-in derivative functions for common mathematical and audio processing operations, helper functions that simplify program construction, and a collection of loss functions suited to audio tasks.

The proposal also explores Faust Neural Net Blocks (FNNBs): pre-written code modules that implement specific neural network functionalities. FNNBs save development time, improve code maintainability, and let audio engineers focus on network architecture rather than low-level programming for each layer.

The project deliverables showcase the utility of the AD library across four phases. Phase I delivers a working example of a differentiable filter or envelope in Faust. Phase II implements a full-scale machine learning algorithm, realized through an autodiff file and a weights file. Phases III and IV deliver an autoencoder implemented using FNNBs.

By combining the AD library, FNNBs, and an architecture file specifying the training process, users can design and train complex neural network models for audio processing tasks entirely within Faust. This eliminates the need to switch between Faust and a separate machine learning framework, while offering the benefits of automatic differentiation for efficient model training.
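As a rough illustration of the forward-mode AD idea behind the proposed primitives, the sketch below pairs a signal with the partial derivative of the output with respect to a parameter, all in plain Faust. The name `diffGain` is hypothetical and does not reflect the proposed library's actual API; this is only a minimal example of how a derivative can travel alongside the signal.

```faust
import("stdfaust.lib");

// A learnable parameter; in a trained model this value would be
// updated by gradient descent rather than set from a slider.
p = hslider("gain", 0.5, 0, 1, 0.01);

// Hypothetical differentiable gain:
//   output 1: y      = p * x   (the audio signal)
//   output 2: dy/dp  = x       (its derivative w.r.t. the parameter p)
diffGain(x) = p * x, x;

process = diffGain;
```

A loss function (for example, the squared error between `y` and a target signal) could then be differentiated through this pair via the chain rule, and the resulting gradient used to update `p` during training.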