Data Engineer

  • Engineering
  • Remote job


Job description

We support recruitment for a UK-based client, a leading competitive intelligence service for search marketers. Their patented technologies give marketers unparalleled accuracy in understanding competitors’ strategies and the insights needed to outperform rivals. They now want to engage seasoned developers from Poland to help elevate their solutions.


As a Data Engineer, your responsibilities will include helping shape the vision for the future architecture of this complex data system and contributing innovative ideas that use cutting-edge technology. You will work closely with the Web and Data Science teams to deliver user-centric solutions to our customers. The data team handles hundreds of millions of data points every day, generated by over two thousand data processes running through workflows, large distributed computations in Spark, and streaming data arriving around the clock at hundreds of events per second.


Key takeaways:

Stack: Scala, Java, Hadoop, Spark, AWS, PostgreSQL, Cassandra; (nice to have): RabbitMQ, Kafka, Chef / Puppet / Ansible, Luigi / Airflow

Salary: 17 000 – 20 000 PLN net B2B (possible UoP)

Location: 100% remote (even after the pandemic)

Recruitment process: 2 steps (around 2h) + technical test


Responsibilities:

  • Building services/features/libraries that serve as definitive examples for new engineers and making major contributions to library code or core services
  • Designing low-risk Spark processes and writing effective, complex Spark jobs (data processes, aggregations, pipelines)
  • Designing low-risk APIs and writing complex asynchronous, highly parallel, low-latency APIs and processes
  • Working as part of an Agile team to maintain, improve, and monitor data collection processes using Java and Scala
  • Writing high-quality, extensible, and testable code by applying good engineering practices (TDD, SOLID)
  • Supporting the TA and Data Science teams to help deliver and productionise their backlogs/prototypes

Requirements

  • Commercial experience developing Spark Jobs using Scala
  • Commercial experience using Java and Scala (Python nice to have)
  • Experience in data processing using traditional and distributed systems (Hadoop, Spark, AWS - S3)
  • Experience designing data models and data warehouses
  • Experience with SQL and NoSQL database management systems (PostgreSQL and Cassandra)


Nice to have:

  • Commercial experience using messaging technologies (RabbitMQ, Kafka)
  • Experience using configuration management software (Chef, Puppet, Ansible, Salt)
  • Confidence building complex ETL workflows (Luigi, Airflow)
  • Good working knowledge of cloud technologies (AWS)
  • Good knowledge of monitoring software (ELK stack)


What you can expect:

  • Joining an engineering culture underpinned by sharing knowledge, coaching and growing together
  • Flexible work schedule
  • Possibility of working 100% remotely (permanently)
  • Private healthcare (Medicover), gym pass (Multisport), life insurance (voluntary benefits, not covered on B2B)
  • Possibility of B2B contract


If you think you would thrive in this team, we would love to hear from you. Just hit the apply button!