
Senior Data Platform Engineer (Fintech)
RITS Professional Services (North America)
About the Company
We collaborate exclusively with a stable US-based client, a global leader in electronic trading platforms that has operated for over 25 years. The company serves the world’s leading asset managers, central banks, hedge funds, and other institutional investors — facilitating around 30 trillion USD in trades every month across its electronic marketplaces.
About the Role
We are looking for a Senior Data Platform Engineer to help build and operate the core infrastructure that powers our data ecosystem. This role focuses on building tools, services and frameworks for data engineers and data scientists, rather than developing individual ETL pipelines. You will work on the platform layer of the data stack, designing and maintaining distributed systems, data processing frameworks and the infrastructure that enables large-scale data processing. You will collaborate closely with data engineers, data scientists and product teams to develop a reliable, scalable and production-grade data platform used across the organization.
Job responsibilities:
- Build and run the data platform using technologies such as public cloud infrastructure (AWS and GCP), Kafka, Spark, databases and containers
- Develop the data platform based on open-source software and cloud services
- Build and run ETL tools and frameworks to onboard data onto the platform, define schemas, build DAG processing pipelines and monitor data quality
- Help develop the machine learning development framework and pipelines
- Manage and run mission-critical production services
Skills:
- Experience as a Data Platform Engineer or SRE
- Strong software engineering experience, including working with Python
- Experience building ETL and stream processing tools and frameworks using Kafka, Spark, Flink, Airflow/Prefect, etc.
- Strong experience working with SQL and databases/engines such as MySQL, PostgreSQL, SQL Server, Snowflake, Redshift, Presto, etc.
- Experience with using AWS/GCP (S3/GCS, EC2/GCE, IAM, etc.), Kubernetes and Linux in production.
- Strong proclivity for automation and DevOps practices
- Experience with managing increasing data volume, velocity and variety
- Agile, self-starting and focused on getting things done
- Strong communication skills
Nice to have:
- Familiarity with data science stack: e.g. Jupyter, Pandas, Scikit-learn, Dask, Pytorch, MLFlow, Kubeflow, etc.
- Financial Services experience
Don't hesitate to apply now!