Hi,
Hope you are doing well.
Role: AWS AI Data Engineer
Job Location: Remote
Duration: 12+ Months Contract
Role Scope / Deliverables
We are looking for an experienced AWS AI Data Engineer to join our dynamic team, responsible for developing, managing, and optimizing data architectures that support AI and Machine Learning (ML) workflows. The ideal candidate will have extensive experience in integrating large-scale datasets, building scalable and automated data pipelines, and working with advanced ML frameworks and tools. The candidate should also have experience with AWS ETL services (such as AWS Glue, Lambda, and Data Pipeline) to handle data processing and integration tasks effectively.
Must Have Skills
1. Proficiency in programming languages such as Python, Scala, or similar.
2. Solid understanding of machine learning frameworks such as TensorFlow and PyTorch.
3. Strong experience in data classification, including the identification of PII data entities.
4. Knowledge and experience with retrieval-augmented generation (RAG) and agent-based workflows.
5. Deep understanding of how to re-rank and improve LLM outputs using indexes and vector stores.
6. Ability to leverage AWS services (e.g., SageMaker, Comprehend, Entity Resolution) to solve complex data and AI-related challenges.
7. Ability to manage and deploy machine learning models and frameworks at scale using AWS infrastructure.
8. Strong analytical and problem-solving skills, with the ability to innovate and develop new approaches to data engineering and AI/ML.
9. Experience with AWS ETL services (such as AWS Glue, Lambda, and Data Pipeline) to handle data processing and integration tasks effectively.
10. Experience with core AWS services, including IAM, VPC, EC2, S3, RDS, Lambda, CloudWatch, and CloudTrail.
Nice to Have Skills
1. Experience with data privacy and compliance requirements, especially related to PII data.
2. Familiarity with advanced data indexing techniques, vector databases, and other technologies that improve the quality of AI/ML outputs.