
Position: Data Engineer
Location: Issaquah, WA (Fully Onsite)

Terms: Long-Term-Contract

 

• Responsible for designing, building, and maintaining scalable, high-performance data pipelines and integration solutions using Python and Google Cloud Platform (GCP) services.
• This role requires a hands-on engineer with strong expertise in data architecture, ETL/ELT development, and real-time/batch data processing, who can collaborate closely with analytics, development, and DevOps teams to ensure reliable, secure, and efficient data delivery across the organization.

Key Responsibilities
• Design, develop, and maintain data pipelines and ETL workflows using Python, Apache Beam, and GCP services such as Dataflow, BigQuery, and Pub/Sub.
• Build and optimize data models, schemas, and data architectures for analytical and operational workloads.
• Develop and maintain batch and real-time data ingestion pipelines using APIs, Datastream (CDC), and Cloud Composer.
• Implement data transformation and data quality frameworks to ensure accuracy, reliability, and consistency.
• Collaborate with data analysts, data scientists, and business teams to deliver reliable datasets and analytical tools.
• Monitor and troubleshoot data pipelines, ensuring availability, reliability, and scalability of systems.
• Develop CI/CD pipelines for data workflows using GitHub, Terraform, and Cloud Build.
• Perform performance tuning, root cause analysis, and system optimization to improve data flow efficiency.
• Work closely with DevOps and security teams to maintain compliance, security, and governance across data systems.
• Stay current with emerging technologies, tools, and best practices in data engineering and cloud computing.
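As a rough illustration of the transformation and data-quality work the responsibilities above describe, here is a minimal, dependency-free Python sketch (the field names and validation rules are hypothetical, not from the posting) of a quality gate that splits incoming records into valid and rejected sets before loading:

```python
from typing import Any, Callable

# Hypothetical quality rules: each field maps to a predicate it must satisfy.
RULES: dict[str, Callable[[Any], bool]] = {
    "order_id": lambda v: isinstance(v, str) and v != "",
    "amount": lambda v: isinstance(v, (int, float)) and v >= 0,
}

def validate(record: dict) -> tuple[bool, list[str]]:
    """Return (is_valid, names of fields that failed) for one record."""
    failures = [field for field, rule in RULES.items()
                if not rule(record.get(field))]
    return (not failures, failures)

def split_records(records: list[dict]) -> tuple[list[dict], list[dict]]:
    """Partition records into (valid, rejected); rejects carry an _errors key."""
    valid, rejected = [], []
    for rec in records:
        ok, failures = validate(rec)
        if ok:
            valid.append(rec)
        else:
            rejected.append({**rec, "_errors": failures})
    return valid, rejected
```

In a Beam/Dataflow pipeline this logic would typically sit inside a `ParDo` with tagged outputs, so rejected records can be routed to a dead-letter table rather than dropped.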

Required Skills & Qualifications
• Strong programming skills in Python, with experience in data pipeline development and automation.
• Hands-on experience with GCP services, including BigQuery, Dataflow, Postgres, Cloud Functions, Cloud Composer, Cloud Scheduler, Datastream (CDC), Pub/Sub, and GCS.
• Proficiency in Apache Spark or similar distributed data processing frameworks.
• Strong SQL skills and understanding of relational and cloud-native databases.
• Experience developing REST API-based ingestion pipelines and JSON-based integrations.
• Familiarity with DevOps and CI/CD tools (GitHub, Terraform, Cloud Build).
• Knowledge of data warehousing concepts, data modeling, and ETL/ELT design principles.
• Understanding of data security, access control, and compliance integration in cloud environments.
• Excellent problem-solving, analytical, and communication skills.
• Experience with Shell or Perl scripting is a plus.
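To illustrate the JSON-based integration skills listed above, here is a minimal stdlib-only Python sketch (the payload shape and separator are hypothetical) that flattens a nested JSON payload from a REST API into a flat row suitable for loading into a warehouse table:

```python
def flatten(obj: dict, parent: str = "", sep: str = "_") -> dict:
    """Recursively flatten a nested JSON object into a single-level dict,
    joining nested keys with `sep` (e.g. customer.address.city -> customer_address_city)."""
    row = {}
    for key, value in obj.items():
        name = f"{parent}{sep}{key}" if parent else key
        if isinstance(value, dict):
            row.update(flatten(value, name, sep))
        else:
            row[name] = value
    return row
```

Flattening at ingestion time is one common design choice; an alternative on BigQuery is to load the nested JSON as-is and rely on nested/repeated columns.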

 

Thanks & Regards,

Mayank Jaiswal | Senior Talent Acquisition Specialist

Amaze Systems Inc

USA: 8951 Cypress Waters Blvd, Suite 160, Dallas, TX 75019

Canada: 55 York Street, Suite 401, Toronto, ON M5J 1R7

E: mayank.jaiswal@amaze-systems.com

