· Collect, clean, prepare and load the necessary data - structured or unstructured - onto Hadoop, our Big Data analytics platform, so that it can be used by data scientists to create insights and address business challenges
· Act as a liaison between the team and other stakeholders, and help support the Hadoop cluster and the compatibility of the various software components that run on the platform (Spark, R, Python, …)
· Experiment with new tools and technologies for data extraction, exploration and processing
· Depending on their skills, the data engineer may also be involved in the analytical aspects of data science projects
· Identify the most appropriate data sources for a given purpose and understand their structure and content, if necessary with the help of subject matter experts (SMEs)
· Extract structured and unstructured data from source systems (relational databases, data warehouses, document repositories, file systems, …), prepare the data (cleanse, restructure, aggregate, …) and load it onto Hadoop
· Actively support data scientists during the data exploration and data preparation phases; where data quality issues are detected, liaise with the data supplier to perform root cause analysis
· Where a use case is meant to become a production application, contribute to the design, build and launch activities
· Ensure the maintenance and support of production applications (on-call duty)
· Liaise with CT teams to address infrastructure issues and to ensure that the components and software used on the platform remain mutually compatible
· Where skills allow, perform advanced data analysis on selected business use cases, with the support of data scientists
· Experience with understanding and creating data flows, data architecture, ETL/ELT development (MS SQL Server SSIS, DataStage, …) and processing of structured and unstructured data
· Proven experience working with data stored in RDBMSs, and experience with or a good understanding of NoSQL databases
· Ability to write performant SQL statements
· Understanding of the Hadoop ecosystem, including Hadoop file formats such as Parquet and ORC
· Very good knowledge of Spark & Scala
· Ability to write MapReduce & Spark jobs
· Experience with open-source technologies used in Big Data analytics, such as Pig, Hive, HBase, Kafka, …
· Ability to analyze data, identify issues such as gaps and inconsistencies, and perform root cause analysis
· Experience working with customers to identify and clarify requirements
· Ability to design solutions that are fit for purpose whilst keeping options open for future needs
· Strong verbal and written communication skills, good customer relationship skills
The following will be considered assets:
· Knowledge of Cloudera
· Experience with Linux and Shell scripting
· Knowledge of Java
· Knowledge of IBM mainframe and DB2
· Knowledge of or experience in classic and emerging business intelligence methodologies
· Knowledge of statistics, data mining, machine learning and predictive modeling, data visualization and information discovery techniques
· A challenging position in a fast-growing company with an international presence.
· A stimulating working environment with a strong team spirit, fostered by many internal events (team building, …).
· A dynamic culture focused on personal development.
· A wide range of training and career development opportunities.
Please apply now!