Job Description
- The ideal candidate will have experience in designing, building, and maintaining scalable and efficient data pipelines and data storage solutions.
- As a Data Engineer, you will be responsible for developing, deploying, and maintaining ETL pipelines, data models, and data governance policies to ensure data consistency, accuracy, and availability.
Key Responsibilities:
- Design and implement data processing and storage solutions using AWS services such as S3, Glue, Athena, Redshift, and EMR.
Must Have:
- Experience building data pipelines for financial services or similar projects in the AWS ecosystem.
- Minimum of 3 years of experience in data engineering or related roles.
- Experience measuring data quality.
- Experience with AWS services such as S3, Glue, Athena, Redshift, and EMR.
- Expert knowledge of Python, PySpark, SQL, and code management (Git).
- Strong ETL skills and experience with data management systems and SQL tuning.
- Experience with data modeling, data warehousing, and data governance.
- Strong analytical and problem-solving skills.
- Excellent communication and collaboration skills.
Good to Have:
- AWS certifications such as AWS Certified Big Data – Specialty or AWS Certified Data Analytics – Specialty.
- Experience with containerization technologies such as Docker and Kubernetes.
- Experience working with Airflow, Prefect, dbt, and other related tools.
- Experience with stream processing technologies such as Kafka and Kinesis.
- Experience with NoSQL databases such as MongoDB and Cassandra.
- Knowledge of Data Security and Privacy.
Benefits
This job is perfect for you if you
- Are creative and an out-of-the-box thinker
- Have excellent execution skills and are passionate about achieving excellence
- Enjoy analytical thinking and have problem-solving capabilities
- Enjoy collaborating with others and building relationships
How to Apply
Interested and qualified? Go to Renmoney on renmoney.zohorecruit.com to apply.