Data Engineer with Databricks and Snowflake / Remote (15 Years' Experience Mandatory)

Data Engineer

Location: Remote

Contract

 

 

Detailed JD:

Role Summary

We are seeking an experienced Data Engineer to design, build, and optimize scalable, high-performance data pipelines using Databricks and Snowflake. The role involves end-to-end ownership of data ingestion, transformation, orchestration, and optimization across cloud-based data platforms.

Key Responsibilities

Data Engineering & Pipeline Development

* Design, develop, and maintain batch and streaming data pipelines using Databricks (PySpark) and Snowflake.

* Build ETL / ELT frameworks to ingest data from multiple sources (RDBMS, APIs, flat files, cloud storage).

* Implement data transformation logic using Python and SQL for scalable and high-volume datasets.

* Develop metadata-driven and reusable pipelines following enterprise data engineering best practices.
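As an illustration of the metadata-driven pattern called for above, the sketch below drives a pipeline from a config list rather than hard-coded steps. All names here (`load_csv`, `load_api`, `PIPELINE_CONFIG`) are hypothetical; a real implementation would use PySpark readers and Snowflake connectors instead of these placeholder loaders.

```python
# Minimal metadata-driven pipeline sketch: each step is described by a
# metadata record, so new sources are onboarded by adding config, not code.

def load_csv(path):
    # Placeholder loader; a real version would read from cloud storage.
    return [{"source": path, "kind": "csv"}]

def load_api(url):
    # Placeholder loader; a real version would page through an API.
    return [{"source": url, "kind": "api"}]

# Registry mapping a metadata "type" to the loader that handles it.
LOADERS = {"csv": load_csv, "api": load_api}

# Pipeline defined purely as metadata (illustrative locations).
PIPELINE_CONFIG = [
    {"type": "csv", "location": "s3://bucket/orders.csv"},
    {"type": "api", "location": "https://example.com/customers"},
]

def run_pipeline(config):
    """Dispatch each metadata record to its matching loader."""
    rows = []
    for step in config:
        loader = LOADERS[step["type"]]
        rows.extend(loader(step["location"]))
    return rows

rows = run_pipeline(PIPELINE_CONFIG)
```

The same dispatch pattern scales to transformation and load steps: each stage reads its behavior from metadata, which is what makes the pipelines reusable.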

Workflow Orchestration

* Create and manage complex workflows using Apache Airflow.

* Implement scheduling, dependency management, retries, alerts, and failure handling.

* Integrate Airflow with Databricks jobs, Snowflake tasks, and cloud services.
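Airflow provides retries and alerting declaratively on each task; the stdlib-only sketch below shows the same retry-with-backoff and failure-handling behavior in plain Python (function names and defaults are illustrative, not an Airflow API):

```python
import time

def run_with_retries(task, max_retries=3, base_delay=0.0, on_failure=None):
    """Run a callable, retrying with exponential backoff.

    Mirrors what Airflow's per-task retries / retry_delay / on_failure_callback
    settings do declaratively.
    """
    attempt = 0
    while True:
        try:
            return task()
        except Exception as exc:
            attempt += 1
            if attempt > max_retries:
                if on_failure:
                    on_failure(exc)  # e.g. page on-call or send an alert
                raise
            # Exponential backoff: delay doubles on each failed attempt.
            time.sleep(base_delay * (2 ** (attempt - 1)))

# Simulated flaky task: fails twice, then succeeds on the third call.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

result = run_with_retries(flaky, max_retries=5)
```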

Databricks & Lakehouse Architecture

* Work on Databricks Lakehouse architecture including Bronze / Silver / Gold (Medallion) layers.

* Optimize Spark jobs using partitioning, caching, broadcast joins, and performance tuning.

* Manage Databricks jobs, clusters, notebooks, and workspace configurations.
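To make the broadcast-join optimization mentioned above concrete, here is a pure-Python sketch of the idea: when one side of a join is small, ship it everywhere as a lookup table instead of shuffling both sides. In PySpark this corresponds to `df_large.join(broadcast(df_small), "key")`; the data here is illustrative.

```python
# Conceptual broadcast join: the small dimension table becomes an in-memory
# dict, and the large fact table streams past it with no shuffle.

small_dim = {"A": "Apples", "B": "Bananas"}    # broadcast (small) side
large_facts = [("A", 10), ("B", 5), ("A", 7)]  # streamed (large) side

# Inner join: keep only fact rows whose key exists in the dimension.
joined = [(k, qty, small_dim[k]) for k, qty in large_facts if k in small_dim]
```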

Snowflake Development

* Design and optimize Snowflake schemas, tables, views, and warehouses.

* Implement Snowflake SQL transformations, performance tuning, and cost optimization.

* Work with Snowflake features such as Time Travel, Cloning, Tasks, and Streams (where applicable).
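For readers unfamiliar with the Snowflake features listed above, the snippet below composes two illustrative SQL statements as Python strings: a Time Travel query (`AT(OFFSET => ...)` rewinds a table to an earlier point in time) and a zero-copy `CLONE`. The table names are hypothetical.

```python
# Illustrative Snowflake SQL, held as strings for clarity.

# Query the "orders" table as it looked one hour ago (offset in seconds).
time_travel_sql = "SELECT * FROM orders AT(OFFSET => -3600)"

# Zero-copy clone: a snapshot copy that shares underlying storage.
clone_sql = "CREATE TABLE orders_backup CLONE orders"
```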

Data Quality, Governance & Security

* Implement data quality checks, validation frameworks, and reconciliation logic.

* Ensure adherence to data governance, security, and compliance requirements.

* Collaborate with governance teams on metadata, lineage, and access controls.
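A minimal sketch of the quality checks and reconciliation logic described above, using stdlib Python only (rules, column names, and sample rows are all illustrative):

```python
# Two basic data-quality primitives: a null check and a source-vs-target
# row-count reconciliation, as would run after each pipeline load.

def check_not_null(rows, column):
    """Return the rows where the given column is missing or None."""
    return [r for r in rows if r.get(column) is None]

def reconcile_counts(source_count, target_count):
    """True when rows extracted equal rows loaded; False flags data loss."""
    return source_count == target_count

rows = [{"id": 1, "amount": 10.0}, {"id": 2, "amount": None}]
bad = check_not_null(rows, "amount")                        # failing rows
ok = reconcile_counts(source_count=len(rows), target_count=2)
```

In practice such checks would quarantine the failing rows and emit metrics for the governance and alerting systems rather than just return lists.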

CI/CD & Operations

* Implement CI/CD pipelines for data code using Git-based version control systems.

* Support production deployments, monitoring, and incident resolution.

 



   
