In order to study fMRI images it is necessary to perform preprocessing, which consists of motion correction, slice-timing correction and realignment, spatial filtering, temporal filtering [2], smoothing, normalization, and segmentation (dividing the brain into gray matter and white matter). This pipeline already takes time for a single subject, and running it over a whole set of subjects and cases can take days of processing. GPU programming started to grow in 2007 [1], when NVIDIA launched CUDA; today there are additional tools for GPU programming such as OpenCL, along with bindings for Java and Python. Big Data work requires these newer programming technologies to handle large datasets, and that is exactly what GPU programming and Spark provide.
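As a rough illustration of the idea, the following is a minimal sketch (not the project's actual pipeline) of how per-subject preprocessing could be distributed with Spark; the function `preprocess_subject` and the file names are hypothetical placeholders for whatever chain of preprocessing steps is ultimately used.

```python
# Minimal sketch: distribute per-subject fMRI preprocessing across a Spark
# cluster. `preprocess_subject` is a hypothetical placeholder for the real
# preprocessing chain (motion correction, slice timing, filtering, smoothing,
# normalization, segmentation), which could itself call GPU-accelerated
# routines (e.g. CUDA or OpenCL kernels) for the heavy steps.
from pyspark import SparkContext


def preprocess_subject(path):
    """Placeholder for the per-subject preprocessing chain."""
    # ... motion correction, slice timing, filtering, smoothing, ...
    return (path, "done")


if __name__ == "__main__":
    sc = SparkContext(appName="fmri-preprocessing")

    # One entry per subject; Spark schedules subjects across worker nodes,
    # so a study with many subjects no longer has to run serially.
    subject_paths = ["subject_%02d.nii.gz" % i for i in range(1, 11)]

    results = sc.parallelize(subject_paths).map(preprocess_subject).collect()
    print(results)

    sc.stop()
```

The point of the sketch is only the parallelization pattern: each subject is an independent work item, so the whole study scales with the number of available workers rather than running for days on one machine.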

Organization

Student

Aldo Camargo

Mentors

  • Eric Ho

2017