Currently, the human assistive system collects EEG data, processes it, and trains customized classifiers. With the increasing number of tested subjects, the goal is to store the rapidly growing data in a distributed storage system such as Apache Hadoop. Data processing would also be implemented on the distributed system, using the MapReduce framework and the related Apache Spark engine, which generalizes the MapReduce model with in-memory processing.
The goal of this project is to create a scalable system that enables storage of very large datasets and fast, distributed training of classifiers on them. A further goal is to provide users with a GUI for browsing and managing the distributed filesystem, as well as for building complete machine learning pipelines.