For one of our customers we are seeking an experienced front-end developer with a track record in data lake / big data projects.
We are looking for a skilled, self-driven consultant to join an ambitious team delivering a high-profile data aggregation and analysis project for Research. The project methodology is agile, so we seek a responsible team player with experience working in agile development teams.
The Data Lake platform runs on a Hadoop cluster (Hortonworks distribution), using PostgreSQL, Hive Metastore and Presto, with processing pipelines built in Python and orchestrated with Airflow. Analytics tools fed by the data lake include Tableau, SAS JMP, Excel, Superset, Jupyter Notebooks and APIs for scripting (e.g. in Python, R, etc.).
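To give a feel for the kind of pipeline work described above, here is a minimal, hedged sketch of a Python load step. It uses only the standard library, with sqlite3 standing in for PostgreSQL; the table and column names are illustrative, not taken from the project.

```python
# Minimal sketch of a pipeline load step; sqlite3 stands in for PostgreSQL.
# Table/column names (staging_measurements, sample_id, value) are hypothetical.
import csv
import io
import sqlite3

def load_measurements(conn, csv_text):
    """Extract rows from CSV text and load them into a staging table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS staging_measurements ("
        "sample_id TEXT, value REAL)"
    )
    rows = [
        (r["sample_id"], float(r["value"]))
        for r in csv.DictReader(io.StringIO(csv_text))
    ]
    conn.executemany(
        "INSERT INTO staging_measurements (sample_id, value) VALUES (?, ?)",
        rows,
    )
    conn.commit()
    return len(rows)

conn = sqlite3.connect(":memory:")
n = load_measurements(conn, "sample_id,value\nA1,0.52\nA2,1.07\n")
print(n)  # prints 2
```

In the real platform a step like this would typically be wrapped as an Airflow task and target PostgreSQL rather than sqlite3.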
To support additional user capabilities, we need a consultant skilled in use-case-driven design and development of web-based front-end applications. Users range from standard “point and click” personas to data scientists.

Required skills:
- Proven track record with JS(X), React and Django REST Framework
- Programming experience with Python 3, Flask
- SQL-database experience (preferably PostgreSQL), and ETL/ELT-work experience
- Development using Linux, Git, Docker, CI/CD (we use Jenkins)
- Fast prototyping and an agile development mindset, while developing for a production environment
- Writing production quality and testable code (and the tests to go along with it)
- Code review skills (we use a workflow based heavily on merge/pull requests)
- Experience with user stories / use cases, and the ability to write technical and user documentation when needed
- Big-picture understanding combined with attention to detail
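As an illustration of the "production quality and testable code (and the tests to go along with it)" expectation above, here is a small, hedged sketch using only the standard library's unittest module; normalize_sample_id is a hypothetical helper, not a function from the project.

```python
# Hypothetical helper plus its tests, using only the standard library.
import unittest

def normalize_sample_id(raw):
    """Normalize user-entered sample IDs: trim whitespace, uppercase."""
    if not raw or not raw.strip():
        raise ValueError("empty sample id")
    return raw.strip().upper()

class NormalizeSampleIdTest(unittest.TestCase):
    def test_trims_and_uppercases(self):
        self.assertEqual(normalize_sample_id("  ab12 "), "AB12")

    def test_rejects_empty(self):
        with self.assertRaises(ValueError):
            normalize_sample_id("   ")

# Run the tests programmatically rather than via unittest.main()
suite = unittest.defaultTestLoader.loadTestsFromTestCase(NormalizeSampleIdTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # prints True
```

In the project's workflow such tests would run in CI (Jenkins) on every merge/pull request.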
Considered an advantage:
- Biology/biotechnology/bioinformatics domain knowledge
- Experience with other programming languages, e.g. Java, Scala
- Experience with RabbitMQ and with building data pipelines
- System architecture and development
- Distributed computing (HPC)
Min. 5 years of professional IT experience.