Senior Data Engineer AWS & Databricks
Holisticon Insight, European Union
The project:
We’re looking for a Senior Data Engineer to join a team supporting our client, a leading player in the automotive industry, currently driving a large-scale initiative to build a unified data platform that will shape how connected vehicle data is used across Europe. The goal is to create a modern, scalable platform (built on Databricks on AWS) that consolidates the client’s vehicle data from multiple sources into one ecosystem, enabling analytics, AI/ML innovation, and data-driven decision-making. You’ll be part of an international team working with data mesh and lakehouse principles, helping design, develop, and maintain a platform that ensures data quality, security, and interoperability while unlocking new business value for the client’s connected vehicle services.
📌 In the role of Data Engineer, you will:
- Design, build, and maintain robust data ingestion pipelines from multiple vehicle data sources.
- Develop and optimize data transformation workflows and orchestration logic using modern data tools.
- Design and implement complex ETL/ELT processes, ensuring high data quality and performance.
- Troubleshoot and resolve pipeline failures and performance bottlenecks.
- Ensure data integrity, consistency, and accuracy across all stages of the data lifecycle.
- Collaborate with business and IT teams to define and deliver data requirements and specifications.
- Prepare and maintain detailed documentation of data sources, ingestion logic, and transformation processes.
- Regularly monitor and report on data quality and platform performance.
- Mentor and support other team members, contributing to best practices and a data-driven culture.
Requirements:
- 5+ years of hands-on experience in data engineering using Python and SQL.
- Strong experience working with AWS and data warehousing services (e.g. Redshift, Databricks, or similar).
- Expertise in ETL/ELT and data orchestration frameworks (e.g. PySpark, Airflow, AWS Glue).
- Experience with real-time or stream processing (e.g. Kafka, Flink, or Spark Structured Streaming).
- Solid understanding of data lake and data warehouse architecture and best practices.
- Skilled in Git and familiar with CI/CD pipelines for data projects.
- Strong background in data analysis and familiarity with tools like Power BI.
- Proven ability to gather, document, and translate business requirements into technical solutions.
- Collaborative mindset, attention to detail, and a passion for building scalable, high-quality data systems.
Benefits:
- Life insurance
- Multisport card
- Fully remote job
- Private medical care
- Flexible working hours
- B2B contract
- Amazing integration events on a regular basis
- Training budget
- Opportunity to help shape our company culture
- Work equipment (laptop, 2 monitors, and accessories)
Interview process:
- Initial interview with Paulina, our recruiter - 30 minutes
- Technical interview with one of our technical interviewers - 30-45 minutes
- Interview with the client - 1 hour
Interested in this offer?
Apply now!