Location : Dallas, TX (Onsite)
Position type : Contract
Project Duration : Long Term
Visa : USC & H4-EAD only
Detailed JD :
As part of the Mail Analytics Data Engineering team, you will work on large-scale batch pipelines, data serving, data lakehouse, and analytics systems, enabling mission-critical decision making, downstream AI-powered capabilities, and more.
If you're passionate about building data infrastructure and platforms that power modern data- and AI-driven businesses at scale, we want to hear from you!
Your Day
● Partner with Data Science, Product, and Engineering to gather requirements and define the data ontology for Mail Data & Analytics
● Lead and mentor junior Data Engineers to support Yahoo Mail’s ever-evolving data needs
● Design, build, and maintain efficient and reliable batch data pipelines to populate core data sets
● Establish and promote standard methodologies for data operations and lifecycle management
● Develop new, and improve and maintain existing, large-scale data infrastructure and systems for data processing and serving, optimizing complex code through advanced algorithmic concepts and an in-depth understanding of the underlying data system stacks
● Create and contribute to frameworks that improve the management and deployment of data platforms and systems, and work with data infrastructure to triage and resolve issues
● Prototype new metrics or data systems
● Define and manage Service Level Agreements for all data sets in allocated areas of ownership
● Provide engineering consulting on large, complex data lakehouse datasets
You Must Have
● BS in Computer Science/Engineering, relevant technical field, or equivalent practical experience, with specialization in Data Engineering
● 8+ years of experience building scalable ETL pipelines with industry-standard orchestration tools (Airflow, Composer, Oozie) and deep expertise in SQL, PySpark, or Scala
● 3+ years leading data engineering development directly with business or data science partners
● Built, scaled, and maintained multi-terabyte data sets, with an expansive toolbox for debugging and unblocking large-scale analytics challenges (skew mitigation, sampling strategies, accumulation patterns, data sketches, etc.)
● Experience with at least one major cloud's suite of offerings (AWS, GCP, Azure).
● Developed or enhanced ETL orchestration tools or frameworks
● Worked within a standard GitOps workflow (branch and merge, PRs, CI/CD systems)
● Experience working with GDPR compliance requirements
● Self-driven, challenge-loving, and detail-oriented, with a strong team spirit, excellent communication skills, and the ability to multitask and manage expectations
Preferred
● MS/PhD in Computer Science/Engineering or relevant technical field, with specialization in Data Engineering
● 3 years of experience with Google Cloud Platform technologies (BigQuery, Dataproc, Dataflow, Composer, Looker)
Looking forward to working with you!
Thanks & Regards
Lokesh Yadav
Sr. Technical Recruiter
CloudThink Tech Inc