**Capital One experience is required — candidates must have previously worked at Capital One. This is a must.**
Hybrid: 3-4 days per week onsite in McLean, VA
Key Responsibilities:
- Design, build, and optimize ETL pipelines in Python using Spark (PySpark)
- Develop scalable data solutions leveraging Databricks, AWS Glue, EMR, and S3
- Collaborate with cross-functional engineering and analytics teams to implement best practices in data ingestion, transformation, and storage
- Support data quality, performance tuning, and process automation across the data lifecycle
- Work in Agile environments with CI/CD and version control tools
Required Skills and Experience:
- 3 to 7+ years of experience in data engineering, preferably in cloud-based environments
- Strong proficiency in Python, SQL, and Spark (PySpark)
- Hands-on experience with AWS data services (S3, Glue, EMR, Redshift, Lambda, Athena)
- Experience with Databricks or equivalent data lake platforms
- Familiarity with modern DevOps practices (Git, Jenkins, Terraform, Airflow, etc.)
Thanks and regards,
Joseph Adams