Zenmid Sols LLC
Local candidates only.
We’re looking for a battle-tested Principal or Senior Data Engineer with deep expertise in Azure, Databricks, PySpark, Python, SQL, data warehousing, and building data products. If you’ve architected scalable lakehouses and optimized massive pipelines, this is for you.
Core Tech Stack & Hands-On Skills
Azure Cloud Mastery: ADLS Gen2, ADF, Synapse, Azure SQL, Key Vault, App Services. You design cloud-native architectures, optimize costs, and nail security/governance (RBAC, Managed Identity, Private Endpoints).
Databricks Wizardry: Full lifecycle dev on Azure Databricks—Lakehouse builds, Delta Live Tables (DLT), Unity Catalog, performance tuning, cluster optimization, and CI/CD pipelines.
Big Data Powerhouse: PySpark for advanced transformations and optimization; Spark Structured Streaming + batch jobs; Delta Lake deep dives.
Coding & Query Superpowers: Python for automation and engineering; complex SQL tuning; analytics engineering.
Data Architecture & Engineering Prowess
Enterprise EDW design (Star/Snowflake, Data Vault 2.0), metadata-driven ingestion, CDC, Medallion layers (Bronze/Silver/Gold), data lineage/cataloging, MDM, and hybrid Lakehouse/EDW setups.
Data Products & Analytics
Build scalable data products with business-aligned semantic layers, KPI frameworks, enterprise reporting, and integrations from ERP/SaaS/ops systems.
DevOps & Best Practices
CI/CD with Azure DevOps/GitHub Actions/Bitbucket; IaC via Terraform/ARM; automated testing (unit/integration/data quality); monitoring/alerting; Agile/Scrum fluency.
Strategic Impact & Advanced Edge
Principal-level architecture and roadmaps; stakeholder collaboration.
Optimize large-scale distributed systems for performance.
Data governance (GDPR/SOX); cloud cost strategies.
Lead migrations from legacy EDW (Teradata/Oracle/SQL Server) to modern Lakehouse.
To apply for this job email your details to paritosh.sood@zenmidsols.com