Job description/tech stack:
We are looking for a proficient Azure Data Engineer to design, build, and maintain scalable data pipelines and architectures on the Azure cloud platform. The ideal candidate will have hands-on experience with data engineering tools and technologies including Python, SQL, Postgres, MongoDB, PySpark, Databricks, and Snowflake. You will collaborate with data scientists, analysts, and business stakeholders to deliver high-quality, performant data solutions that enable data-driven decision-making.
Key Responsibilities
· Design, develop, and optimize end-to-end data pipelines and ETL/ELT processes leveraging Azure Data services and frameworks.
· Build scalable data solutions using Azure Databricks, PySpark, and Snowflake to process both batch and real-time data workloads.
· Develop and maintain data models and schemas in relational and NoSQL databases such as Postgres and MongoDB.
· Write efficient, reusable, and maintainable code primarily in Python and SQL to transform and load data across various systems.
· Collaborate with cross-functional teams including data scientists, analysts, and business users to gather requirements and deliver data solutions that meet business needs.
· Monitor data pipeline performance and implement improvements for reliability, scalability, and optimization.
· Ensure data quality, governance, and compliance across all data engineering efforts.
· Troubleshoot and resolve data-related issues, working closely with cloud infrastructure and platform teams.
· Document data architecture, workflows, and processes to support ongoing maintenance and knowledge sharing.
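For a sense of the day-to-day work, here is a minimal PySpark sketch of the kind of batch transformation the role involves on Databricks. It is illustrative only; the paths, table, and column names are hypothetical, not part of this posting.

```python
# Illustrative only: paths and column names below are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_batch").getOrCreate()

# Read a raw batch landed in the lake (hypothetical mount path).
orders = spark.read.parquet("/mnt/raw/orders/")

# Basic cleansing plus a derived partition column.
cleaned = (
    orders
    .dropDuplicates(["order_id"])
    .withColumn("order_date", F.to_date("order_ts"))
    .filter(F.col("amount") > 0)
)

# Persist as Delta, partitioned by date (a common Databricks pattern).
(
    cleaned.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .save("/mnt/curated/orders/")
)
```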