Role: Sr. PySpark Developer
Location: Reading, PA (Hybrid 2-3 days)
Duration: 12 Months
Job description:
- 6-8 years of industry experience
- Develop and optimize data processing workflows using PySpark, Unix, and Spark in Scala.
- Ensure high performance and reliability of data pipelines.
- Collaborate with cross-functional teams to integrate data solutions with existing systems.
- Implement best practices for data storage and retrieval using Amazon S3.
- Conduct code reviews and provide constructive feedback to team members.
- Troubleshoot and resolve issues related to data processing and storage.
- Design and implement scalable data architectures.
- Contribute to the continuous improvement of development processes.
- Ensure compliance with data security and privacy regulations.
- Possess strong expertise in PySpark, Unix, and Spark in Scala.
- Have extensive experience with Amazon S3 for data storage and retrieval.
- Show proficiency in troubleshooting and resolving data-related issues.
- Exhibit excellent collaboration and communication skills.
- Have a proven track record of designing scalable data architectures.
Looking forward to your prompt response.
P.S. Empower is a top vendor to clients such as Apex Systems LLC, Sogeti, Randstad, Capgemini, UST, and more.
Thanks
Mayank Verma
Technical Recruiter | Empower Professionals
……………………………………………………………………………………………………………………..
mayank@empowerprofessionals.com | LinkedIn: https://www.linkedin.com/in/mayankdverma/
Fax: 732-356-8009 | 100 Franklin Square Drive – Suite 104 | Somerset, NJ 08873