Get C2C/W2 Jobs & hotlist update

Looking for a Data Architect SME (Remote)

Hello,

This is Tejaswini, Senior Lead Recruiter at Metasis Information Systems.

This is in reference to the following position:

 

Data Architect SME

Job Location: Remote

 

Our client is looking for a Data Architect SME resource.

Note: The candidate should have hands-on experience with Databricks + AWS; data modeling and design; PySpark scripting; SQL; Unity Catalog and security design; identity federation; auditing and observability (system tables, APIs, external tools); access control and governance in Unity Catalog; external locations and storage credentials; personal access tokens and service principals; metastore and Unity Catalog concepts; interactive vs. production workflows; policies and entitlements; and compute types (including UC and non-UC, scaling, and optimization).

1. Data Strategy & Architecture Development

• Define and implement the data architecture and data strategy aligned with business goals.

• Design scalable, cost-effective, and high-performance data solutions using Databricks on AWS, Azure, or GCP.

• Establish best practices for Lakehouse architecture and Delta Lake for optimized data storage, processing, and analytics (see the sketch below).
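
For illustration, here is a minimal PySpark sketch of the bronze/silver (medallion) layering that Lakehouse designs typically standardize on; the bucket, schema, and column names are hypothetical.

    # A minimal medallion sketch, assuming hypothetical paths/tables and that
    # the bronze/silver schemas already exist. In Databricks notebooks the
    # `spark` session is provided; it is built explicitly here for clarity.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()

    # Bronze: land raw source files as-is to preserve source fidelity.
    raw = spark.read.json("s3://example-bucket/raw/orders/")  # hypothetical path
    raw.write.format("delta").mode("append").saveAsTable("bronze.orders")

    # Silver: deduplicated and typed, ready for analytics.
    (spark.table("bronze.orders")
        .dropDuplicates(["order_id"])
        .withColumn("order_ts", F.to_timestamp("order_ts"))
        .write.format("delta")
        .mode("overwrite")
        .saveAsTable("silver.orders"))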

 

2. Data Engineering & Integration

• Architect ETL/ELT pipelines leveraging Databricks Spark, Delta Live Tables, and Databricks Workflows (see the sketch below).

• Optimize data ingestion from sources like Oracle Fusion Middleware, webMethods, MuleSoft, and Informatica into Databricks.

• Ensure real-time and batch data processing with Apache Spark and Delta Lake.

• Work on data integration strategies, ensuring seamless connectivity with enterprise systems (e.g., Salesforce, SAP, ERP, CRM).
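
A minimal Delta Live Tables sketch of such a pipeline, assuming a hypothetical JSON source on S3; `dlt` and `spark` are supplied by the DLT pipeline runtime.

    # A minimal Delta Live Tables sketch; the source path and table names
    # are hypothetical.
    import dlt
    from pyspark.sql import functions as F

    @dlt.table(comment="Raw events landed from cloud storage via Auto Loader")
    def bronze_events():
        return (spark.readStream.format("cloudFiles")          # Auto Loader
                .option("cloudFiles.format", "json")
                .load("s3://example-bucket/raw/events/"))      # hypothetical path

    @dlt.table(comment="Validated events for downstream consumers")
    @dlt.expect_or_drop("valid_id", "event_id IS NOT NULL")    # data-quality rule
    def silver_events():
        return dlt.read_stream("bronze_events").withColumn(
            "ingest_ts", F.current_timestamp())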

 

3. Data Governance, Security & Compliance

• Implement data governance frameworks leveraging Unity Catalog for data lineage, metadata management, and access control (see the sketch below).

• Ensure compliance with HIPAA, GDPR, and other regulatory standards in life sciences.

• Define RBAC (role-based access control) and enforce data security best practices using Databricks SQL and access policies.

• Enable data stewardship and ensure data cataloging for self-service data democratization.
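
For illustration, a sketch of Unity Catalog grants issued through Databricks SQL from a notebook; the catalog, schema, table, and group names are hypothetical.

    # Hypothetical catalog/schema/table/group names; `spark` is the session
    # Databricks provides in a notebook.
    spark.sql("GRANT USE CATALOG ON CATALOG clinical TO `data_analysts`")
    spark.sql("GRANT USE SCHEMA ON SCHEMA clinical.silver TO `data_analysts`")
    spark.sql("GRANT SELECT ON TABLE clinical.silver.trials TO `data_analysts`")
    # Finer-grained control (row filters, column masks, dynamic views) can be
    # layered on top of these object-level grants.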

 

4. Performance Optimization & Cost Management

• Optimize Databricks compute clusters (DBU usage) for cost efficiency and performance tuning.

• Define and implement query optimization techniques using Photon Engine, Adaptive Query Execution (AQE), and caching strategies (see the sketch below).

• Monitor Databricks workspace health, job performance, and cost analytics.
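
A small tuning sketch, assuming a hypothetical Delta table; AQE is already on by default in recent Databricks runtimes, so the settings are shown only to make them explicit.

    # Hypothetical table names; `spark` is the notebook-provided session.
    spark.conf.set("spark.sql.adaptive.enabled", "true")                    # AQE
    spark.conf.set("spark.sql.adaptive.coalescePartitions.enabled", "true")

    # Compact small files and co-locate rows on a common filter column.
    spark.sql("OPTIMIZE silver.orders ZORDER BY (customer_id)")

    # Cache a hot dimension table for repeated interactive queries.
    spark.table("silver.customers").cache().count()  # count() materializes the cache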

 

5. AI/ML Enablement & Advanced Analytics

• Design and support ML pipelines leveraging Databricks MLflow for model tracking and deployment (see the sketch below).

• Enable AI-driven analytics in genomics, drug discovery, and clinical data processing.

• Collaborate with data scientists to operationalize AI/ML models in Databricks.
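
A minimal MLflow tracking sketch using synthetic data; the run name, parameter, and metric here are purely illustrative.

    # Purely illustrative: synthetic data, hypothetical run/metric names.
    import mlflow
    import mlflow.sklearn
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=500, n_features=10, random_state=42)
    X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=42)

    with mlflow.start_run(run_name="baseline"):
        model = LogisticRegression(max_iter=200).fit(X_train, y_train)
        mlflow.log_param("max_iter", 200)
        mlflow.log_metric("val_accuracy", model.score(X_val, y_val))
        mlflow.sklearn.log_model(model, "model")  # versioned artifact for deployment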

 

6. Collaboration & Stakeholder Alignment

• Work with business teams, data engineers, AI/ML teams, and IT leadership to align data strategy with enterprise goals.

• Collaborate with platform vendors (Databricks, AWS, Azure, GCP, Informatica, Oracle, MuleSoft) for solution architecture and support.

• Provide technical leadership, conduct PoCs, and drive Databricks adoption across the organization.

 

7. Data Democratization & Self-Service Enablement

• Implement data-sharing frameworks for self-service analytics using Databricks SQL and BI integrations (Power BI, Tableau).

• Promote data literacy and empower business users with self-service analytics.

• Establish data lineage and cataloging to improve data discoverability and governance.

 

8. Migration & Modernization

• Lead the migration of legacy data platforms (Informatica, Oracle, Hadoop, etc.) to the Databricks Lakehouse.

• Design a roadmap for cloud modernization, ensuring seamless data transition with minimal disruption.

 

Mandatory Key Skills:

1. Databricks & Spark Expertise

• Strong knowledge of Databricks Lakehouse architecture (Delta Lake, Unity Catalog, Photon Engine).

• Expertise in Apache Spark (PySpark, Scala, SQL) for large-scale data processing.

• Experience with Databricks SQL and Delta Live Tables (DLT) for real-time and batch processing.

• Understanding of Databricks Workflows, job clusters, and task orchestration.

 

2. Cloud & Infrastructure Knowledge

• Hands-on experience with Databricks on AWS, Azure, or GCP (AWS Databricks preferred).

• Strong understanding of cloud storage (ADLS, S3, GCS) and cloud networking (VPC, IAM, PrivateLink).

• Experience with Infrastructure as Code (Terraform, ARM, CloudFormation) for Databricks setup.

 

3. Data Modeling & Architecture

• Expertise in data modeling (dimensional, star schema, snowflake schema, Data Vault); see the sketch below.

• Experience with Lakehouse, Data Mesh, and Data Fabric architectures.

• Knowledge of data partitioning, indexing, caching, and query optimization.
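
For illustration, a sketch of how a star schema is typically queried in PySpark: a fact table joined to its dimensions on surrogate keys. All table and column names are hypothetical.

    # Hypothetical star-schema tables; `spark` is the notebook-provided session.
    fact = spark.table("gold.fact_prescriptions")   # one row per prescription
    dim_drug = spark.table("gold.dim_drug")         # drug attributes
    dim_date = spark.table("gold.dim_date")         # calendar attributes

    (fact.join(dim_drug, "drug_key")                # surrogate-key joins
         .join(dim_date, "date_key")
         .groupBy("drug_name", "year")
         .agg({"quantity": "sum"})
         .show())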

 

4. ETL/ELT & Data Integration

• Experience designing scalable ETL/ELT pipelines using Databricks, Informatica, MuleSoft, or Apache NiFi.

• Strong knowledge of batch and streaming ingestion (Kafka, Kinesis, Event Hubs, Auto Loader).

• Expertise in Delta Lake and Change Data Capture (CDC) for real-time updates (see the sketch below).
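
A minimal CDC upsert sketch using the Delta Lake MERGE API; the table names and the `op` change-type column are hypothetical.

    # Hypothetical tables: `silver.customers` is the target, and
    # `bronze.customer_changes` holds the latest CDC batch with an `op` column.
    from delta.tables import DeltaTable

    target = DeltaTable.forName(spark, "silver.customers")
    changes = spark.table("bronze.customer_changes")

    (target.alias("t")
        .merge(changes.alias("s"), "t.customer_id = s.customer_id")
        .whenMatchedDelete(condition="s.op = 'DELETE'")   # apply deletes first
        .whenMatchedUpdateAll()                           # then updates
        .whenNotMatchedInsertAll()                        # then inserts
        .execute())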

 

5. Data Governance & Security

• Deep understanding of Unity Catalog, RBAC, and ABAC for data access control.

• Experience with data lineage, metadata management, and compliance (HIPAA, GDPR, SOC 2).

• Strong skills in data encryption, masking, and role-based access control (RBAC).

 

6. Performance Optimization & Cost Management

• Ability to optimize Databricks clusters (DBU usage, autoscaling, Photon Engine) for cost efficiency.

• Knowledge of query tuning, caching, and performance profiling.

• Experience monitoring Databricks job performance using Ganglia, CloudWatch, or Azure Monitor.

 

7. AI/ML & Advanced Analytics

• Experience integrating Databricks MLflow for model tracking and deployment.

• Knowledge of AI-driven analytics, genomics, and drug discovery in life sciences.

 

 

Thanks & Regards,

 

Tejaswini Badagouni

Sr. Lead - Talent Acquisition

Metasis Information Systems LLC

www.Metasisinfo.com

Tejaswini.b@Metasisinfo.com

  

 

 

 

Disclaimer: We respect your online privacy. This is not an unsolicited mail. Under Bill 1618, Title III, passed by the 105th U.S. Congress, this mail cannot be considered spam as long as we include contact information and a method to be removed from our mailing list. If you are not interested in receiving our emails, please reply with "REMOVE" in the subject line. We apologize for any inconvenience caused by this mail.

 

 
 
 

