Duration: 12 Months
Location: New York City, NY
Primary Skills: Data-Snowflake with AWS Native Services, ETL
Description:
This role supports the data warehouse and analytics data infrastructure. It
focuses on ingesting operational data from source systems (primarily
PostgreSQL), transforming and modelling that data in the data warehouse
(currently Snowflake), and preparing reporting-ready datasets and optimized
queries for analytics and reporting services.
Responsibilities:
Design, develop, and maintain a scalable data warehouse
architecture to support analytics and reporting needs.
Build and manage data ingestion pipelines that move
operational data (from PostgreSQL) into the data warehouse (Snowflake) with
high reliability and data quality.
Transform and model raw data into reporting-friendly schemas (e.g.,
dimensional models, denormalized datasets, or analytics-optimized structures).
Collaborate with stakeholders to translate business requirements into
scalable data solutions.
Establish and promote data engineering best practices,
including naming conventions, documentation, testing, and performance
optimization.
Monitor data pipelines and warehouse performance,
proactively identifying and resolving data quality, latency, or scalability
issues.
Contribute to architectural decisions around data modelling, ingestion
patterns, and warehouse optimization.
Participate in agile development processes and take on additional tasks
within the department as assigned by management.
Qualifications:
4-8 years of professional experience with data modelling concepts (e.g.,
dimensional modelling, star/snowflake schemas, reporting-optimized structures).
Familiarity with ELT/ETL concepts, data pipelines, and
orchestration practices.
Experience ingesting and transforming data from relational databases,
particularly PostgreSQL.
Solid experience with Git-based version control for database
code and data transformations.
Experience supporting multiple environments (development,
staging, production) with controlled deployment processes.
Strong problem-solving skills.