Data Engineer/Senior Data Engineer

  • Category
  • Contract type
    Permanent employment
  • Years of experience
  • Location

Adecco Poland Sp. z o.o. is Employment Agency No. 364

Client description:

Our client from the automotive industry is looking for engineers to build the most captivating experiences on the latest frontier of automotive technology. You will be part of a global engineering organization with a modern, state-of-the-art agile culture. Collaboration is key.

Responsibilities:

  • Design and implement secure, scalable, high-performance, and robust data services for connected vehicle data on distributed data processing platforms using modern Big Data technologies.
  • Design, extend, and review data architecture, models, flows, and integrations – be hands-on and involved in every stage of the software development life cycle.
  • Partner with software engineers and data scientists to meet the needs of upstream and downstream dependencies.
  • Work with external carrier stakeholders and internal partner teams to ingest connectivity data for data processing.
  • Develop state-of-the-art code – influence and establish the team's software development culture. Establish standards and best practices for instrumentation within software engineering.
  • Keep up to date with evolving Big Data technology and share knowledge across the organization by enabling best practices, standards, governed processes, and relevant technologies.

Candidate profile:

  • BA/BSc in Computer Science, Engineering, Mathematics, or a related technical discipline preferred.
  • 2+ years of experience in data engineering and the software development life cycle.
  • 2+ years of hands-on experience building and maintaining production data applications, with current experience in both relational and columnar data stores.
  • Experience with more than one programming language.
  • Experience with one or more functional languages such as F#, Scala, or Haskell.
  • Experience working with cloud Big Data platforms (e.g. Spark, Google BigQuery, Azure Data Warehouse).
  • Familiarity with time-series databases, data streaming applications, Kafka, Flink, and more.
  • Experience with workflow management engines (e.g. Airflow, Luigi, Azure Data Factory).
  • Familiarity with modern data science and product analytics tools and techniques such as R, machine learning, and advanced statistics is a plus.
  • Understanding of how to work with Hive metastores.
  • Experience with continuous integration tools such as Jenkins.
  • Experience creating containerized applications with Docker and running them on Kubernetes.

We offer:

  • Contract of employment
  • Opportunity for professional development
  • Remote work
  • Participation in global projects