Big Data Developer :: Richmond VA

06/22/20 2:01 PM

Greetings,

We have an opening for a Big Data Developer. The requirement details are below; if you are interested, please reply with your updated resume. Your earliest reply is highly appreciated.

Candidates should be strong in Python and Scala programming and have at least 1 year of AWS experience.
 
Job Role: Big Data Developer
Location: Richmond, VA
Client: Cognizant / Capital One

Positions Open: 24
 
We have a combination of Python/Spark/AWS and Spark/Scala/AWS roles. Please do thorough due diligence before sharing profiles; no fake or junior candidates.
Let candidates know there will be a coding test at the client interview.

 

Skill     | Rating out of 10 | Experience in years
----------|------------------|--------------------
Spark     |                  |
Python    |                  |
Scala     |                  |
AWS       |                  |
Snowflake |                  |

 
 
 

Exact Job Location/Work Address: Richmond, VA
Project Duration (relevant for CWR): 7+ Months
Required Technologies:
  • Strong programming experience with object-oriented/functional scripting languages: Python.
  • Experience with big data tools: Hadoop, Apache Spark, etc.
  • Experience with AWS cloud services: S3, EC2, EMR, RDS, Redshift.
  • Experience with stream-processing systems: Storm, Spark Streaming, etc. (nice to have).
  • Experience with relational SQL, Snowflake, and NoSQL databases.

 

Job Description (detailed overview of functional and technical role expectations): The candidate should have 5+ years of experience in a Data Engineer role and a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field. They should also have working experience with the following software/tools:
 

  • 3+ years (mid-level) of strong programming experience with object-oriented/functional scripting languages: Python.
  • 3+ years (mid-level) of experience with big data tools: Hadoop, Apache Spark, Kafka, etc.
  • 1+ years of experience with AWS cloud services: S3, EC2, EMR, RDS, Redshift.
  • Experience with stream-processing systems: Storm, Spark Streaming, etc. (nice to have).
  • 1+ years of experience with relational SQL, Snowflake, and NoSQL databases, including Postgres and Cassandra.

 
 
 
Responsibilities for Data Engineer:
 

  • Create and maintain optimal data pipeline architecture; assemble large, complex data sets that meet functional/non-functional business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS "big data" technologies.
  • Build analytics tools that use the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
  • Work with stakeholders, including the Executive, Product, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs.
  • Create data tools that help analytics and data science team members build and optimize our product into an innovative industry leader.
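For candidates preparing for the coding test: the pipeline work described above boils down to extract-transform-load steps. As a rough illustration only (sqlite3 stands in here for a warehouse such as Snowflake or Redshift, and the table and column names are made up), a minimal Python sketch of such a transformation:

```python
import sqlite3

# Toy "extract": in a real pipeline these rows would come from S3 or Kafka
# and the job would run on Spark/EMR. Here we use an in-memory SQLite DB.
rows = [
    ("2020-06-01", "signup", 3),
    ("2020-06-01", "purchase", 1),
    ("2020-06-02", "signup", 5),
]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (day TEXT, kind TEXT, n INTEGER)")
conn.executemany("INSERT INTO events VALUES (?, ?, ?)", rows)

# "Transform/load": aggregate daily totals -- the kind of SQL transformation
# a pipeline job performs before loading results into a reporting table.
daily = dict(conn.execute(
    "SELECT day, SUM(n) FROM events GROUP BY day ORDER BY day"
))
print(daily)  # {'2020-06-01': 4, '2020-06-02': 5}
```

The same GROUP BY aggregation maps directly onto a Spark DataFrame `groupBy(...).sum(...)` in either Python or Scala.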

 

Preferred Skills: Python, Hadoop, Apache Spark, AWS, Snowflake/SQL knowledge
Years of Experience Required: 6+

Note: If you know someone who may be a good fit, please forward this to them.

Thanks,
Next Level Business Services, Inc.
Consulting| Analytics| Staff Augmentation

Ritesh Sharma
11340 Lakefield Drive Suite #200,
Johns Creek, GA, 30097
(904) 236-5131
ritesh.sharma@nlbservices.com

If you would prefer to no longer receive any emails from this company, you may opt out at any time by clicking here.
