
Senior Data Engineer – PySpark and Databricks in New York City, NY

Job Title: Senior Data Engineer – PySpark and Databricks

Location: New York City, NY (Hybrid – 3 Days Onsite)
Experience Required: 12+ Years (Strict Requirement)
Local / Nearby NYC Candidates Only
LinkedIn ID Required for Submission

Job Summary
We are seeking a highly experienced Senior Data Engineer with strong hands-on expertise in PySpark and Databricks to support large-scale data engineering initiatives for the Federal Reserve Board (FRB). The ideal candidate will have a deep background in building, optimizing, and modernizing enterprise data pipelines and distributed processing systems in a cloud environment.
This role requires 12+ years of experience in data engineering, strong technical leadership, and the ability to collaborate with cross-functional teams. Candidates must be able to work onsite 3 days each week in NYC.

Responsibilities:
Data Engineering & Development
Design, build, and maintain scalable, high-performance data pipelines using PySpark and Databricks.
Develop and optimize ETL/ELT processes for structured and unstructured datasets.
Build and enhance data ingestion frameworks, streaming pipelines, and batch workflows.
Databricks & Spark Optimization
Utilize Databricks notebooks, Delta Lake, and Spark SQL for data transformations.
Optimize PySpark jobs for performance, cost-efficiency, and scalability.
Troubleshoot Spark performance issues and implement best practices.
Data Architecture & Modeling
Work with architects to design data lake/lakehouse solutions.
Implement data modeling standards, schema management, and data quality frameworks.
Maintain and improve data governance, metadata, and lineage processes.
Collaboration & Delivery
Partner with data scientists, analysts, and business teams to support analytical requirements.
Translate business needs into technical solutions and deliver production-ready datasets.
Participate in Agile ceremonies, sprint planning, and code reviews.

Required Skills & Qualifications

Mandatory Requirements

12+ years of professional experience in Data Engineering (no exceptions).
Strong hands-on expertise in PySpark (advanced level).
Deep proficiency with Databricks (development + optimization).
Strong knowledge of Spark SQL, Delta Lake, and distributed data processing.
Solid experience in ETL/ELT design, large-scale data pipelines, and performance tuning.
Experience working in cloud environments (AWS, Azure, or GCP).
Excellent communication and documentation skills.

Preferred Skills
Prior experience in banking, finance, or federal organizations.
Experience with CI/CD tools (Git, Jenkins, Azure DevOps).
Knowledge of data governance, security, and compliance frameworks.

Additional Information
Work Mode: Hybrid – 3 days onsite in NYC (mandatory).
Only local or nearby candidates will be considered due to the onsite requirement.
Excellent opportunity to work with a major federal client on high-impact data engineering initiatives.

Contact Information

Email: vineet.sharma@hire-in.com
