TensorFlow is a popular machine learning and deep learning framework developed by Google. Although it was launched only in late 2015, it has been widely adopted in both industry and academia. The TensorFlow community recently (in Jan '17) open sourced XLA, a JIT compiler for linear algebra computations, to improve execution speed and reduce the memory usage of programs.
The LLVM and Polly Labs community has developed an optimization tool, Polly, that works on polyhedral representations of programs, performing automatic parallelization and data locality optimizations. Polly extracts a polyhedral representation of the program directly from LLVM IR and then performs a number of loop optimizations, including tiling and vectorization.
The problem statement I am proposing is to enable Polly's optimizations for XLA. It is well understood that Polly's optimizations work especially well on programs with deeply nested array-update code, such as stencils and dense linear algebra kernels, and they are therefore directly applicable to a machine learning framework like TensorFlow.