Role: Data Engineer with strong AWS
Location: NYC, NY (Hybrid, locals only)
Duration: 12+ Months
Requirements:
• Hands-on experience with AWS services: S3, Athena, Glue, EMR, and VPC
• Creating and managing data pipelines and StreamSets jobs with Kafka
• 5+ years of proven experience as a PySpark developer or in a related role
• Strong programming skills in Scala, Spark, or Python
• Experience working with AWS cloud and Amazon DocumentDB
• Familiarity with big data processing tools and techniques
• Experience with streaming data platforms
• Excellent analytical and problem-solving skills
• Strong knowledge of Kafka topics and KSQL
• Extensive Unix, FTP, and file-handling experience
• Strong hands-on experience with NoSQL databases such as MongoDB
• Experience with Agile methodology, Jira, and Confluence
• Hands-on experience handling all processors