Position: Data Engineer
Location: Issaquah, WA (Fully Onsite)
Terms: Long-Term Contract
• Will be responsible for designing, building, and maintaining scalable, high-performance data pipelines and integration solutions using Python and Google Cloud Platform (GCP) services.
This role requires a hands-on engineer with strong expertise in data architecture, ETL/ELT development, and real-time/batch data processing, who can collaborate closely with analytics, development, and DevOps teams to ensure reliable, secure, and efficient data delivery across the organization.
Key Responsibilities
• Design, develop, and maintain data pipelines and ETL workflows using Python, Apache Beam, and GCP services such as Dataflow, BigQuery, and Pub/Sub (see the pipeline sketch after this list).
• Build and optimize data models, schemas, and data architectures for analytical and operational workloads.
• Develop and maintain batch and real-time data ingestion pipelines using APIs, Datastream (CDC), and Cloud Composer.
• Implement data transformation and data quality frameworks to ensure accuracy, reliability, and consistency.
• Collaborate with data analysts, data scientists, and business teams to deliver reliable datasets and analytical tools.
• Monitor and troubleshoot data pipelines, ensuring availability, reliability, and scalability of systems.
• Develop CI/CD pipelines for data workflows using GitHub, Terraform, and Cloud Build.
• Perform performance tuning, root cause analysis, and system optimization to improve data flow efficiency.
• Work closely with DevOps and security teams to maintain compliance, security, and governance across data systems.
• Stay current with emerging technologies, tools, and best practices in data engineering and cloud computing.
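
For illustration only, below is a minimal sketch of the kind of streaming pipeline described above: an Apache Beam job that reads JSON messages from Pub/Sub and appends them to BigQuery, runnable on Dataflow. The project, topic, and table names are hypothetical placeholders, not details of this role.

    import json

    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions


    def run():
        # streaming=True is required for the unbounded Pub/Sub source;
        # pass --runner=DataflowRunner (plus project/region) to run on Dataflow.
        options = PipelineOptions(streaming=True)
        with beam.Pipeline(options=options) as p:
            (
                p
                | "ReadFromPubSub" >> beam.io.ReadFromPubSub(
                    topic="projects/example-project/topics/example-events")  # hypothetical topic
                | "ParseJson" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
                | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
                    "example-project:analytics.events",  # hypothetical table
                    write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
                    create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER)
            )


    if __name__ == "__main__":
        run()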
Required Skills & Qualifications
• Strong programming skills in Python with experience in data pipeline development and automation.
• Hands-on experience with GCP services, including BigQuery, Dataflow, Dataproc, Cloud Functions, Cloud Composer, Cloud Scheduler, Datastream (CDC), Pub/Sub, and GCS.
• Proficiency in Apache Spark or similar distributed data processing frameworks.
• Strong SQL skills and understanding of relational and cloud-native databases.
• Experience developing REST API-based ingestion pipelines and JSON-based integrations (a minimal ingestion sketch follows this list).
• Familiarity with DevOps and CI/CD tools (GitHub, Terraform, Cloud Build).
• Knowledge of data warehousing concepts, data modeling, and ETL/ELT design principles.
• Understanding of data security, access control, and compliance integration in cloud environments.
• Excellent problem-solving, analytical, and communication skills.
• Experience with Shell or Perl scripting is a plus.
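
Again purely as an illustration, here is a minimal sketch of a REST API-to-BigQuery ingestion step, assuming a hypothetical endpoint and table; a production pipeline would add pagination, retries, authentication, and schema management.

    import requests
    from google.cloud import bigquery


    def ingest(url: str, table_id: str) -> None:
        # Pull JSON records from the source API.
        resp = requests.get(url, timeout=30)
        resp.raise_for_status()
        rows = resp.json()  # assumed to be a list of flat JSON objects

        # Stream the rows into BigQuery; insert_rows_json reports errors per row.
        client = bigquery.Client()
        errors = client.insert_rows_json(table_id, rows)
        if errors:
            raise RuntimeError(f"BigQuery insert errors: {errors}")


    if __name__ == "__main__":
        ingest("https://api.example.com/v1/orders",    # hypothetical endpoint
               "example-project.analytics.orders_raw")  # hypothetical table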
Thanks & Regards,
Mayank Jaiswal | Senior Talent Acquisition Specialist
Amaze Systems Inc
USA: 8951 Cypress Waters Blvd, Suite 160, Dallas, TX 75019
Canada: 55 York Street, Suite 401, Toronto, ON M5J 1R7