Please mention visa status and location.
Job Description
We are looking for an AWS Data Engineer with 7+ years of experience to support the development and maintenance of scalable data pipelines and ETL processes. The candidate will work with AWS services, Python, and SQL to help build data solutions that support analytics and reporting.
Key Responsibilities
- Develop and maintain ETL pipelines using Python and PySpark on AWS platforms such as AWS Glue.
- Support data workflow orchestration using AWS Step Functions and serverless components such as AWS Lambda.
- Work with AWS messaging services (SNS, SQS) to support event-driven data processing.
- Assist in designing and maintaining data storage solutions using Amazon Redshift.
- Write and optimize SQL queries for data transformation, validation, and reporting.
- Monitor data pipelines and help troubleshoot issues related to data quality and performance.
- Work closely with data engineers, analysts, and stakeholders to understand data requirements and implement solutions.
- Follow best practices for code management, documentation, and CI/CD processes.
Required Skills
- 7+ years of experience in data engineering, ETL development, or related roles.
- Strong programming experience with Python and PySpark for data processing.
- Hands-on experience with AWS services such as Glue, Lambda, SNS, SQS, Redshift, and Step Functions.
- Good knowledge of SQL and relational data modeling.
- Understanding of ETL processes, data quality checks, and pipeline monitoring.
- Familiarity with Git version control and basic CI/CD practices.
Please find the requirement above and share profiles at