For decades, CPUs had no hardware floating point support; floating point, in both single and double precision, was implemented in software using bit-twiddling operations and integer math. Starting with GLSL 1.30, GPUs expose all of the bit-twiddling operators and integer math needed to do the same thing. GPUs natively support single precision, but only OpenGL 4.0 class GPUs have hardware support for double precision. The goal of this project is to implement a library of double precision operations in pure GLSL 1.30.
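To illustrate the soft-float idea, here is a minimal C sketch of a floating point multiply built entirely from integer operations; the same bit manipulations carry over to GLSL 1.30's `uint` type. This is not code from the project itself: `softfloat_mul` is an illustrative name, it handles only positive, normal IEEE-754 single-precision inputs, uses truncating rounding, and omits zeros, infinities, NaNs, and subnormals.

```c
#include <stdint.h>

/* Multiply two IEEE-754 single-precision bit patterns using only
 * integer arithmetic. Simplified: assumes normal, finite inputs
 * whose product neither overflows nor underflows; rounding is by
 * truncation rather than round-to-nearest-even. */
uint32_t softfloat_mul(uint32_t a, uint32_t b)
{
    uint32_t sign = (a ^ b) & 0x80000000u;            /* XOR of sign bits   */
    int32_t  ea   = (int32_t)((a >> 23) & 0xFFu) - 127; /* unbiased exponents */
    int32_t  eb   = (int32_t)((b >> 23) & 0xFFu) - 127;
    uint64_t ma   = (a & 0x7FFFFFu) | 0x800000u;      /* restore implicit 1 */
    uint64_t mb   = (b & 0x7FFFFFu) | 0x800000u;

    uint64_t p = ma * mb;          /* 24x24-bit mantissa product (48 bits) */
    int32_t  e = ea + eb;

    if (p & (1ull << 47)) {        /* product in [2,4): renormalize */
        p >>= 24;
        e += 1;
    } else {                       /* product in [1,2) */
        p >>= 23;
    }

    uint32_t frac = (uint32_t)p & 0x7FFFFFu;          /* drop implicit 1   */
    return sign | ((uint32_t)(e + 127) << 23) | frac; /* reassemble result */
}
```

A real double precision library faces the same steps at 64-bit width, which on GLSL 1.30's 32-bit integers means carrying each value as a pair of `uint`s, plus the rounding and special-case handling elided here.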