Top 50 Big Data Engineer Jobs (Remote and Onsite): Quick Overview and Apply Online

Big data engineer jobs across the US, remote and onsite, with top-notch clients.

Big Data Developer

Location: Baltimore, MD

Role: Python – Big Data

Job Details:
Must Have Skills:
Python and bash shell
REST API with SAML authentication
Experience with Docker containers


Detailed Job Description:
Set up the required integrations in line with the target architecture design to acquire existing metadata, user profiles, or credentials
Develop REST API-based integrations connecting Data.World with Solari, Compass, Salesforce, SSO, and Vault
Automate jobs using Python and Bash shell scripts
Experience with REST APIs using SAML authentication; Apigee is a big plus (a minimal integration sketch follows this description)
Experience integrating metadata from multiple database technologies, with a focus on PostgreSQL, Snowflake, MySQL, Oracle, SQL Server, AWS Glue Catalog, AWS S3, Business Intelligence tools such as Power BI, MicroStrategy, and Tableau, and ETL/ELT tools such as dbt, Spark, etc.
Experience with Docker containers
Experience with AWS services, with a focus on ECS Fargate containers or Lambda
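To make the REST/SAML integration requirement concrete, here is a minimal sketch of registering table metadata with a catalog over REST, assuming an upstream SSO/SAML exchange has already produced a bearer token; the endpoint URL, dataset name, and payload fields are hypothetical placeholders, not the client's actual APIs.

```python
import os

import requests

# Hypothetical catalog endpoint; in practice this would be the Data.World
# (or other catalog) REST API, typically fronted by SSO/Apigee.
CATALOG_URL = "https://api.example-catalog.com/v0/datasets/example/tables"

# Assumes a SAML/SSO flow has already yielded a bearer token for the API.
TOKEN = os.environ["CATALOG_API_TOKEN"]

payload = {
    "name": "orders",
    "source": "postgresql://analytics/orders",
    "columns": [
        {"name": "order_id", "type": "bigint"},
        {"name": "created_at", "type": "timestamp"},
    ],
}

resp = requests.post(
    CATALOG_URL,
    json=payload,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
print("Registered metadata:", resp.json())
```

In a real setup the token exchange and routing would usually be handled by the SSO provider or an API gateway such as Apigee rather than hard-coded in the script.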

Position: Big Data Developer

Location: NYC, NY (onsite role only)

Contract Job

Experience: 10+ years

 Persistent

Experience/Responsibilities

• Full Stack Developer with a specialized focus on document management, automation, and system integration

• Developing document discovery leveraging graph analytics, document classification, document inventory, and media management on the back end, with a modern web app platform (React preferred) for the front end.

• Modern web app framework experience (e.g., React)

• Java and Python experience required

• Big Data experience

• Ability to work on a small agile team delivering services and experiences that automate the collection of documents and media from heterogeneous systems into a unified experience.

• Documents will be packaged, digitally signed, and pushed downstream for consumption (see the sketch after this list)

• Experience in developing microservices and familiarity with both the open-source and Microsoft dev stacks.

• Key items in the stack: Spark, Elasticsearch, Cloudera, Redis and Kubernetes

• 5+ years of relevant experience
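The packaging and signing responsibility above can be pictured with a short sketch. The following uses only the Python standard library to zip a set of collected documents and write a detached HMAC-SHA256 signature before handing the bundle downstream; the file paths and shared key are illustrative assumptions, and a production pipeline would more likely use proper digital signatures (e.g., X.509) backed by a key-management service.

```python
import hashlib
import hmac
import os
import zipfile


def package_and_sign(doc_paths, archive_path, key: bytes) -> str:
    """Zip the collected documents and write a detached HMAC-SHA256 signature."""
    # Package: bundle every document into a single compressed archive.
    with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in doc_paths:
            zf.write(path, arcname=os.path.basename(path))

    # Sign: compute an HMAC over the archive bytes (stand-in for a real signature).
    with open(archive_path, "rb") as f:
        digest = hmac.new(key, f.read(), hashlib.sha256).hexdigest()

    # Push the archive plus its signature file downstream for consumption.
    with open(archive_path + ".sig", "w") as f:
        f.write(digest)
    return digest


# Example usage with placeholder paths and key:
# package_and_sign(["a.pdf", "b.docx"], "bundle.zip", key=b"shared-secret")
```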


Position: Big Data Developer

Exp: 8+ years

Loc: CA (100% remote)

Skills:

•        BS degree in computer science, computer engineering or equivalent

•        7-8 years of experience delivering enterprise software solutions

•        Proficient in Spark, Scala, Python, and AWS cloud technologies

•        3+ years of experience across multiple Hadoop / Spark technologies such as Hadoop, MapReduce, HDFS, HBase, Hive, Flume, Sqoop, Kafka, Scala

•        Flair for data, schemas, and data models, and for bringing efficiency to the big data life cycle

•        Must be able to quickly understand technical and business requirements and translate them into technical implementations

•        Experience with Agile Development methodologies

•        Experience with data ingestion and transformation (see the sketch after this list)

•        Solid understanding of secure application development methodologies

•        Experience developing microservices using the Spring framework is a plus

•        Experience with Airflow and Python is preferred

•        Understanding of automated QA needs related to big data

•        Strong object-oriented design and analysis skills
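As a rough illustration of the data ingestion and transformation skills listed above, here is a minimal PySpark sketch that reads raw CSV files, cleans them, and writes partitioned Parquet; the S3 paths and column names are made-up placeholders.

```python
from pyspark.sql import SparkSession, functions as F

# Minimal ingestion-and-transformation sketch; paths and columns are placeholders.
spark = SparkSession.builder.appName("orders-ingest").getOrCreate()

raw = (
    spark.read
    .option("header", "true")
    .csv("s3://example-bucket/raw/orders/")  # hypothetical landing zone
)

clean = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount") > 0)
       .dropDuplicates(["order_id"])
)

(clean.write
      .mode("overwrite")
      .partitionBy("order_date")  # partition the curated zone by date
      .parquet("s3://example-bucket/curated/orders/"))
```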


Role: Big Data Developer / Engineer (9+ years)
Location: Initially remote; onsite after 2-3 months

Duration : 12+ Months

Job description

As a member of our Software Engineering Group, we look first and foremost for people who are passionate about solving business problems through innovation and engineering practices. You’ll be required to apply your depth of knowledge and expertise to all aspects of the software development lifecycle, and to partner continuously with your many stakeholders to stay focused on common goals. We embrace a culture of experimentation and constantly strive for improvement and learning. You’ll work in a collaborative, trusting, thought-provoking environment, one that encourages diversity of thought and creative solutions that are in the best interests of our customers globally.

This role requires a wide variety of strengths and capabilities, including:

Bachelor’s degree in Information Technology, Computer Science, Computer Engineering, or related field of study

Six (6) years of experience with modern programming languages such as Java, Scala, and Python

Big Data Hadoop ecosystem technology stacks such as HDFS, Spark, Hive, MapReduce, etc.

Hands-on experience with two or more AWS data lake and/or analytics technologies such as S3, IAM, Athena, EMR, Redshift, QuickSight, AWS Glue, DataBrew, and Apache Airflow (see the sketch after this list)

Knowledge of version control tools like Bitbucket, and of job scheduling and code deployment tools such as Jenkins and CI/CD pipelines

Working proficiency in development toolsets such as IntelliJ and the use of JUnit

Proficiency in databases like Oracle, Teradata

Hands-on experience with data mart design and development

Advanced knowledge of application, data and infrastructure architecture disciplines

Ability to work in large, collaborative teams to achieve organizational goals
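Since the list above calls out Apache Airflow alongside Athena and the S3 data lake, here is a minimal sketch of a daily Airflow DAG that runs an Athena query via boto3; it assumes Airflow 2.x, and the database, table, bucket, and region names are placeholders rather than the client's actual resources.

```python
from datetime import datetime

import boto3
from airflow import DAG
from airflow.operators.python import PythonOperator


def run_athena_query():
    # Run a hypothetical aggregation over an S3-backed data lake table.
    athena = boto3.client("athena", region_name="us-east-1")
    athena.start_query_execution(
        QueryString="SELECT order_date, COUNT(*) FROM orders GROUP BY order_date",
        QueryExecutionContext={"Database": "example_lake"},
        ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},
    )


with DAG(
    dag_id="daily_orders_report",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",  # one run per day
    catchup=False,
) as dag:
    PythonOperator(task_id="run_athena_query", python_callable=run_athena_query)
```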

Role: Big Data Developer

Location: Dallas, TX or Chicago, IL

Duration: Long Term

Experience: 

Job Description:

Please share profiles for the below-mentioned requirements:

Experience Required:

• Hands-on experience with Scala/Big Data technologies (Spark/Flink, Hive, Oozie, HBase/Cassandra/MongoDB, Redis, YARN, Kafka)

• Experience with the Java/Scala programming languages

• Experience with shell scripting

• Experience with ETL job design using NiFi/StreamSets/Talend/Pentaho/etc.

• Working experience on multiple Hadoop/Spark-based projects that have been deployed to production

• Knowledge of Lambda/Kappa architectures

• Experience with performance tuning of Hive and Spark jobs

• Basic experience in data modelling for Hive and NoSQL (partitioning, bucketing, row key design, etc.); see the sketch at the end of this posting

• Experience in debugging/troubleshooting Hadoop and Spark jobs

• Maven builds

• Deployment of Hadoop/Spark jobs to production

Good to have:

• Experience with migration of data from a data warehouse to a Hadoop-based Data Lake

• Knowledge of data warehousing concepts such as facts, dimensions, SCD, star schema, etc.

• Experience with, or at least good conceptual knowledge of, Docker, Kubernetes, and microservices

• Experience working on a Hadoop/Spark cluster on a cloud platform (Google/Amazon/IBM Cloud)

Roles & Responsibilities: Data Lake implementation, off-boarding of data from the existing data warehouse (Teradata) to the Data Lake, and migration of the existing ETL jobs using Hadoop/Spark so that they populate the Data Lake instead of the existing data warehouse.

Technical/Functional Skills: ETL job development using Scala and Big Data technologies (Hadoop, Hive, Spark, Flink), NoSQL (HBase/Cassandra/MongoDB), Oozie workflows, Redis cache, YARN resource manager, shell scripting, Java/Scala programming, distributed messaging systems (Kafka), debugging/troubleshooting of Hadoop, Spark, and Oozie jobs, and performance tuning of Hadoop/Spark jobs.

Good to have: Experience working on a cloud platform (Google/Amazon/IBM), data warehousing knowledge, healthcare domain knowledge, knowledge and implementation of Lambda/Kappa architecture, knowledge of microservices, Docker, Kubernetes, etc., and Teradata-to-Data-Lake migration experience.
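The data modelling item above (partitioning, bucketing, etc.) can be shown with a short PySpark snippet that writes a partitioned, bucketed Hive table; the database, table, and column names are placeholders, and enableHiveSupport() assumes a Hive metastore is configured.

```python
from pyspark.sql import SparkSession

# Sketch of Hive data modelling from Spark: a partitioned, bucketed table.
spark = (
    SparkSession.builder
    .appName("hive-modelling-sketch")
    .enableHiveSupport()  # assumes a Hive metastore is available
    .getOrCreate()
)

events = spark.read.parquet("s3://example-bucket/curated/events/")  # hypothetical input

(events.write
       .mode("overwrite")
       .format("parquet")
       .partitionBy("event_date")    # prune scans by date partition
       .bucketBy(32, "customer_id")  # cluster rows for join/lookup efficiency
       .sortBy("customer_id")
       .saveAsTable("example_lake.events_bucketed"))
```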

