Data Engineer
Position: Spark Tech Lead (Big Data Engineer)
Client: Hexaware
End Client: Freddie Mac
Location: McLean, VA (Day-1 onsite; 3 days onsite & 2 days remote) (EST-zone candidates only)
Duration: Long term
NOTE: Looking for strong Big Data resumes with strong Spark experience; less Java experience is acceptable, but the candidate must be senior level. Please do not submit candidates from an ETL background.
Job Description:
Mandatory:
- 10+ years of experience in the solution, design, and development of applications using Java 8+/J2EE, Spring, Spring Boot, microservices, and RESTful services, with strong Big Data experience and a background working with heavy data volumes.
- Develop, program, and maintain applications using the Apache Spark open-source framework.
- Work with different aspects of the Spark ecosystem, including Spark SQL, DataFrames, Datasets, and streaming
- Strong programming skills in Java, Scala, or Python as a Spark developer
- Familiarity with big data processing tools and techniques
- Proven experience as a Spark Developer or a related role
- Experience with the Hadoop ecosystem
- Good understanding of distributed systems
- Experience with streaming data platforms
- Must have strong Big Data experience and a background working with heavy data volumes
- Must be strong in AWS cloud event-based architecture, Kubernetes, and the ELK stack (Elasticsearch, Logstash & Kibana)
- Must have excellent experience designing and implementing cloud-based solutions across various AWS services (e.g., S3, Lambda, Step Functions, AMQ, SNS, SQS, CloudWatch Events, etc.)
- Must be well experienced in the design and development of microservices using Spring Boot, REST APIs, and GraphQL
- Must have solid knowledge and experience in NoSQL (MongoDB)
- Good knowledge and experience in queue-based implementations
- Strong knowledge/experience in ORM Framework – JPA / Hibernate
- Good knowledge in technical concepts – Security, Transaction, Monitoring, Performance
- Should be well versed in TDD/ATDD
- Should have experience with Java, Python, and Spark
- 2+ years of experience designing and implementing cloud-based solutions across various AWS services
- Strong experience in DevOps tool chain (Jenkins, Artifactory, Ansible/Chef/Puppet/Spinnaker, Maven/Gradle, Atlassian Tool suite)
- Very good knowledge and experience in non-functional (technical) requirements such as security, transactions, performance, etc.
- Excellent analytical and problem-solving skills
Nice to have:
- Experience with OAuth implementation using Ping Identity
- Experience in API Management (Apigee) or Service Mesh (Istio)
- Good knowledge and experience in queue/topic (ActiveMQ) based implementations
- Good knowledge and experience in Scheduler and Batch Jobs
- Experience with Unix shell scripting
- AWS certification preferred
Regards,
Himank Jani
Technical Recruiter
ApTask is a global, diversity certified staffing and recruiting company that specializes in IT, finance and accounting, and blockchain developer placements.
Fintech Consulting LLC DBA ApTask
Connect: (551) 277-3720 | himankj@aptask.com