In this project I will design and develop a semi-generic interface that enables Holmes Processing to manage the execution of advanced statistical and machine learning analysis operations. The system will consist of four main components: the core, the analytic engine, rendering, and the analytic services.

The core will receive and schedule taskings, manage their execution, and monitor their state (sketched below). It will also take care of the backend management necessary for the analytic engines.

The analytic engine will provide a modular way to connect machine learning and statistical frameworks such as Apache Spark and TensorFlow to the core and the analytic services, offering a generic interface across different technologies and vendors (sketched below).

Rendering will provide a way for humans to interact with the system through an API and a website that display running jobs together with their schedules and results.

Analytic services provide the logic for the machine learning tasks. They should be able to daisy-chain jobs together to leverage multiple engines at once, provide all the configuration needed for the job and its execution, and expose a RESTful interface for communication (sketched below).

(Quoted and extended from the official Honeynet idea page)
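To make the components above more concrete, the following minimal Python sketches illustrate how they could fit together. All class, function, and field names in them are my own placeholders, not existing Holmes Processing code. First, the core could receive taskings, schedule them, and report their state roughly like this:

    from dataclasses import dataclass, field

    @dataclass
    class Tasking:
        name: str
        engine: str      # which analytic engine should run this tasking
        config: dict
        state: str = "QUEUED"

    @dataclass
    class Core:
        taskings: list = field(default_factory=list)

        def schedule(self, tasking: Tasking) -> None:
            # A real core would also persist the tasking and notify the chosen engine.
            self.taskings.append(tasking)

        def monitor(self) -> dict:
            # Summarize how many taskings are currently in each state.
            summary = {}
            for t in self.taskings:
                summary[t.state] = summary.get(t.state, 0) + 1
            return summary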
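The analytic engine layer could be a small common interface that every backend connector implements, so the core never needs to know whether a job runs on Spark, TensorFlow, or something else. This is only an assumed shape for that interface; a real connector would submit jobs through the respective framework's own API:

    from abc import ABC, abstractmethod

    class AnalyticEngine(ABC):
        """Common interface an engine connector exposes to the core (hypothetical)."""

        @abstractmethod
        def submit(self, job_config: dict) -> str:
            """Submit a job described by job_config and return a backend job id."""

        @abstractmethod
        def status(self, job_id: str) -> str:
            """Return the current state of the job, e.g. RUNNING, FINISHED, FAILED."""

    class SparkEngine(AnalyticEngine):
        """Illustrative Apache Spark connector; the submission itself is a placeholder."""

        def submit(self, job_config: dict) -> str:
            # A real connector would call spark-submit or Spark's REST submission API here.
            print("submitting Spark job:", job_config.get("name", "unnamed"))
            return "spark-job-0001"

        def status(self, job_id: str) -> str:
            return "RUNNING"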
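Finally, an analytic service could describe a daisy-chained job declaratively and hand that description to the core through its RESTful interface. The structure and field names below (engine, job, params, depends_on) are purely illustrative:

    # Two stages on two different engines: feature extraction on Spark,
    # then classification of the extracted features on TensorFlow.
    chain = [
        {
            "engine": "spark",
            "job": "extract_features",
            "params": {"source": "holmes-storage", "window": "24h"},
        },
        {
            "engine": "tensorflow",
            "job": "classify_samples",
            "params": {"model": "malware-family-v1"},
            "depends_on": "extract_features",
        },
    ]

    for stage in chain:
        # Each stage would only be submitted once its dependency has finished.
        print("would submit", stage["job"], "to the", stage["engine"], "engine")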



Christian von Pentz


  • webstergd
  • Ryan Harris