DATA ENGINEER/ ARCHITECT (COGNOS_ODI_SNOWPRO CERTIFIED) AVAILABLE

(THIS IS NOT A REQUIREMENT; THIS IS A RESUME I AM SHARING. PLEASE DO NOT SEND ME RESUMES.)

 

Hi Friends,

Hope you are doing great!

Please find enclosed the resume of a DATA ENGINEER / ARCHITECT (COGNOS, ODI, SNOWPRO CERTIFIED).

Please let us know if you have any requirements for this consultant.

 

  • 15+ years of IT experience in the analysis, design, and development of data warehouse and business intelligence applications (professional and academic project work) using CA Erwin, Cognos products, and Oracle.
  • Extensive experience with:

ETL Tools: Informatica, ODI, Cognos Data Manager.

BI Tools: BOBJ, Cognos 10/Cognos 8 (Framework Manager, Report Studio, Query Studio, Event Studio, Cognos Connection), OBIEE (Answers/BI Admin Tool), QlikView, Tableau.

Cloud: Azure Data Lake, Azure Data Factory, Azure Synapse Analytics, Azure Databricks, Azure SQL Database, Azure SQL Data Warehouse.

Programming: Python, Spark Core, Spark SQL, Spark Streaming.

  • Around 4 years of experience in Snowflake data warehouse development; SnowPro Core certified.
  • Experience in data modeling, design, and data analysis, covering conceptual, logical, and physical modeling for Online Transaction Processing (OLTP) and Online Analytical Processing (OLAP).
  • Extensive experience in data warehouse/data mart design, development, data modeling, and star schema design, with a solid understanding of data warehousing, OLTP, and OLAP concepts.
  • 5+ years of EDW/BI architecture, 15 years of ETL/BI development, and extensive performance tuning experience.
  • Experience with PeopleSoft and Salesforce ERP systems and with HEALTHQUEST, STAR, Meditech, and EPIC (Clarity) EHR systems.
  • Extensive experience writing views, materialized views, and T-SQL and PL/SQL scripts for Oracle, SQL Server, and Teradata DWH environments.
  • Experience working with multiple relational databases, including Oracle, Teradata, and MS SQL Server.
  • Experience with SQL, PL/SQL, T-SQL, stored procedures, functions, packages, triggers, and scripting languages.
  • Experience leading on-site and offshore developers and ensuring timely resolution of issues.
  • Responsible for interacting with business partners to identify information needs and business requirements for reports.
  • Involved in production/customer support, deployment, development, and integration.
  • Strong communication and analytical skills, with a desire to keep up with advancements in the IT industry.

Data Engineering:

  • Experienced in transforming data into vital information to serve client/customer needs using Apache Spark, Python, SQL, and their ecosystems.
  • Implementation of Azure cloud components: Azure Data Factory, Azure Data Lake Analytics, Azure Data Lake, Azure Databricks, Azure Synapse Analytics, Azure Data Lake Store, Azure SQL DB/DW, Power BI, U-SQL, and T-SQL.
  • Expertise in creating Azure Blob storage and SQL DB resources and launching Windows and Linux virtual machines in Azure.
  • Working knowledge of Azure services such as Azure SQL Database, Virtual Network, Azure Active Directory, and Power BI.
  • Strong hands-on experience with Spark Core, PySpark, Spark SQL, and Spark Streaming.
  • In-depth knowledge of and experience in building Spark applications using Python; a brief illustrative sketch follows this list.
  • Basic knowledge of exporting data to Power BI to generate reports.
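
To illustrate the kind of Spark-in-Python work described above, below is a minimal PySpark sketch; the file paths, view, and column names are hypothetical placeholders, not taken from any client project.

# Minimal PySpark sketch: read raw files, transform with Spark SQL, write curated output.
# Paths, view names, and columns are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("claims_curation_example")
    .getOrCreate()
)

# Read raw claims data (e.g., landed in the data lake as Parquet).
claims = spark.read.parquet("/mnt/raw/claims/")

# Basic cleansing with the DataFrame API.
claims_clean = (
    claims
    .filter(F.col("claim_amount").isNotNull())
    .withColumn("load_date", F.current_date())
)

# Register a temp view and aggregate with Spark SQL.
claims_clean.createOrReplaceTempView("claims_clean")
monthly_totals = spark.sql("""
    SELECT provider_id,
           date_trunc('month', service_date) AS service_month,
           SUM(claim_amount)                 AS total_amount
    FROM claims_clean
    GROUP BY provider_id, date_trunc('month', service_date)
""")

# Write the curated output back to the lake, partitioned by month.
monthly_totals.write.mode("overwrite").partitionBy("service_month").parquet("/mnt/curated/monthly_claims/")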

 

CERTIFICATIONS

  • IBM Certified Solution Expert – Cognos BI (IBM Cognos 8 BI Author, IBM Cognos 8 BI Metadata Model Developer and IBM Cognos 8 BI Administrator).
  • Oracle Data Integrator 11g Certified Implementation Specialist.
  • Databricks Data Engineer Associate – 2022.

TECHNICAL SKILLS

Data Modeling/Data Warehousing Tools:   Informatica PowerMart/PowerCenter, IBM InfoSphere DataStage 8.5, ODI 10g/11g/12c.

BI/OLAP Tools:                          Cognos 10/8, PowerPlay, Cognos Query, OBIEE, Cognos Administration Console 8.4, Transformer, Tableau Server, Tableau Desktop.

Databases:                              SQL Server 2008/2005/2000/7.0, Oracle 12c/11.x/10.x/9i/8i, Exadata, Snowflake, Redshift.

Case Tools:                             Erwin, Visio.

Operating Systems:                      Windows XP/7/10, UNIX, MS-DOS.

Languages:                              SQL, PL/SQL, T-SQL, Shell Script, JavaScript, HTML, VB, ASP, ASP.NET, and XML.

Others:                                 SQL Query Analyzer, SQL Enterprise Manager, Oracle GoldenGate (OGG), ODBC, Office 2000, Excel, Toad, Control-M, CA ESP.

PROFESSIONAL EXPERIENCE

 

Client: Health, MI                                                          Nov 2016 – Current                                                             

Project II: Lead Data Engineer/Architect (Mar 2018 – present)

  • Worked on a migration project to move data from legacy systems (Oracle/Teradata) to Snowflake using an Azure account.
  • Implemented ELT in the Snowflake data warehouse and data marts, mirroring the legacy (Oracle) system.
  • Worked in Snowflake creating the data warehouse, data marts, databases, virtual warehouses, shared objects, stage objects, and file format objects.
  • Created an ingestion framework to migrate data from Azure ADLS to Snowflake using Snowpipe, streams, and tasks (see the sketch after this list).
  • Created Snowpipe pipes to continuously load files from the Azure account into Snowflake raw tables, based on cloud event notifications.
  • Worked on full and delta loads from source to Snowflake.
  • Used bulk-load COPY and Snowpipe for continuous loading of CSV files from the AWS account into raw tables, based on AWS event notifications.
  • Worked with streams that capture data arriving in RAW tables, and tasks scheduled to load that data into staging tables as soon as the stream has data.
  • Developed stored procedures to validate the count of rows extracted against the count of rows loaded to RAW/ODS.
  • Worked on zero-copy cloning, Time Travel, aggregation and analytical SQL on structured and VARIANT data, and querying Snowflake staged files directly.
  • Knowledge of Snowflake architecture, data caching, access management, the SnowSQL CLI, and loading semi-structured JSON files.
  • Used schemachange to deploy SQL scripts through CI/CD pipelines.
  • Drove discussions with source teams to define mappings between source, stage, and RAW/ODS tables.
  • Led and participated in functional and technical discussions to translate functional requirements into technical requirements, working in an Agile environment.
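
For illustration, below is a minimal sketch of the stage / Snowpipe / stream / task pattern described above, issued through the Snowflake Python connector. All object names, the container URL, the table columns, and the credentials are hypothetical placeholders, and Azure auto-ingest additionally requires a notification integration configured separately; this is a sketch of the pattern, not the project's actual code.

# Minimal sketch of the Snowpipe + stream + task ingestion pattern in Snowflake.
# All object names and credentials below are hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",
    user="etl_user",
    password="********",
    warehouse="LOAD_WH",
    database="EDW",
    schema="RAW",
)
cur = conn.cursor()

# Hypothetical RAW and staging tables.
cur.execute("CREATE SCHEMA IF NOT EXISTS STG")
cur.execute("""CREATE TABLE IF NOT EXISTS RAW.CLAIMS
               (claim_id NUMBER, member_id NUMBER, claim_amount NUMBER(12,2), service_date DATE)""")
cur.execute("CREATE TABLE IF NOT EXISTS STG.CLAIMS LIKE RAW.CLAIMS")

# External stage pointing at the ADLS/Blob container where source files land.
cur.execute("""
    CREATE OR REPLACE STAGE raw_claims_stage
      URL = 'azure://myaccount.blob.core.windows.net/claims/'
      CREDENTIALS = (AZURE_SAS_TOKEN = '<sas-token>')
      FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
""")

# Snowpipe for continuous loading into the RAW table; with AUTO_INGEST, new files
# are loaded as cloud event notifications arrive.
cur.execute("""
    CREATE OR REPLACE PIPE raw_claims_pipe AUTO_INGEST = TRUE AS
      COPY INTO RAW.CLAIMS FROM @raw_claims_stage
""")

# Stream that captures rows arriving in the RAW table.
cur.execute("CREATE OR REPLACE STREAM raw_claims_stream ON TABLE RAW.CLAIMS")

# Task that moves stream data into staging whenever the stream has data.
cur.execute("""
    CREATE OR REPLACE TASK load_stg_claims
      WAREHOUSE = LOAD_WH
      SCHEDULE = '5 MINUTE'
      WHEN SYSTEM$STREAM_HAS_DATA('RAW_CLAIMS_STREAM')
    AS
      INSERT INTO STG.CLAIMS (claim_id, member_id, claim_amount, service_date)
      SELECT claim_id, member_id, claim_amount, service_date
      FROM raw_claims_stream
      WHERE METADATA$ACTION = 'INSERT'
""")
cur.execute("ALTER TASK load_stg_claims RESUME")

cur.close()
conn.close()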

 

Environment: Spark Core, Python, Spark SQL, Azure Databricks, Azure Data Factory V2, Azure Synapse, Azure Data Lake Gen2.

 

Project I: Lead ETL Developer                                                 Nov 2016 – Mar 2018

  • Studied and understood the business scenarios of the existing systems, translating business requirements into ETL and BI design.
  • Expert in all activities related to the development, implementation, and support of ETL processes for large-scale data warehouses using PowerCenter.
  • Developed Informatica technical design documentation for loading data from legacy systems into staging and data warehouse tables.
  • Worked extensively on performance tuning of programs, ETL procedures, and processes; used the Debugger to troubleshoot logical errors.
  • Provided production support for 4,000+ ETL jobs built in Informatica, Oracle PL/SQL (code dating back to the 2000s), and shell scripts, scheduled via cron/Control-M and loading to Oracle and Teradata.
  • Built Informatica mappings and workflows to process data into the different dimension and fact tables.
  • Used Informatica Designer to create source/target definitions, mappings, and sessions to extract, transform, and load data from various sources into staging tables.
  • Developed complex mappings using Informatica PowerCenter Designer to transform and load data from source systems such as flat files, XML, and Oracle to an Oracle target database.
  • Used the Debugger wizard to remove bottlenecks at the source, transformation, and target levels for optimal use of sources, transformations, and target loads.
  • Worked extensively on stored procedures, triggers, views, and indexes using SQL*Plus and PL/SQL in SQL Server and Oracle 12c/18c.
  • Used Oracle performance tuning techniques to optimize SQL queries used in Informatica and within PL/SQL code.
  • Involved in migrating Oracle DB procedure code to Teradata, along with data model changes.
  • Extensively used Teradata TPT via Informatica to load data into ODS tables; wrote dynamic SQL in Teradata to create Type-I and Type-II history for source data loaded to the ODS (the Type-I/Type-II concept is sketched after this list).
  • Used BTEQ and FastLoad scripts via shell to load data into Teradata.
  • Made recommendations on Teradata PI, NUPI, PPI, and USI choices based on the joins in the ETL code and the BI layer.
  • Worked on claims data from source EHR systems such as EPIC, HQ, and Meditech; extensive experience with financial month-end closing.
  • Performed code reviews in all environments.
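
As a toy illustration of the Type-I vs. Type-II handling mentioned above, here is a small self-contained Python sketch of the concept; the actual implementation was dynamic SQL in Teradata, and the dimension and attribute names below are hypothetical.

# Illustrative sketch of Type-I vs. Type-II handling for a dimension row.
# This is an in-memory toy example of the concept, not the actual Teradata dynamic SQL.
from datetime import date

# Current dimension rows: one open (current) record per business key for Type-II.
dim_provider = [
    {"provider_id": 101, "name": "Dr. Smith", "specialty": "Cardiology",
     "eff_start": date(2015, 1, 1), "eff_end": None, "current_flag": True},
]

def apply_type1(rows, key, changes):
    """Type-I: overwrite attributes in place; no history is kept."""
    for row in rows:
        if row["provider_id"] == key and row["current_flag"]:
            row.update(changes)

def apply_type2(rows, key, changes, as_of):
    """Type-II: close out the current row and insert a new versioned row."""
    for row in rows:
        if row["provider_id"] == key and row["current_flag"]:
            row["eff_end"] = as_of          # expire the old version
            row["current_flag"] = False
            new_row = {**row, **changes,
                       "eff_start": as_of, "eff_end": None, "current_flag": True}
            rows.append(new_row)
            return

# A spelling correction is a Type-I overwrite; a specialty change is tracked as Type-II history.
apply_type1(dim_provider, 101, {"name": "Dr. John Smith"})
apply_type2(dim_provider, 101, {"specialty": "Internal Medicine"}, date(2020, 6, 1))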

Environment:  Informatica 10.2, Business Objects 4.0, Oracle 12c/18c, Teradata 16.2, Informatica MDM 9.0, SQL Developer 4.1.3, CONTROL-M, SQL Server 2016

Thanks & Regards,

KUMAR MAK

Zuven Technologies Inc

2222 West Spring Creek PKWY,

Suite 102, Plano, TX -75023

(M) 267-594-1520,  469-581-7797 (Desk)

Fax: 469 718 0405

Email: mak@zuventech.com

Website: www.zuventech.com

ALWAYS REWARD EFFORTS, LATER OUTCOME.

 
