Cribl Engineer
Bellevue, WA
Long Term Contract
We are seeking a skilled Cribl Engineer to lead the design and implementation of data pipelines that ingest data from various sources into Snowflake. This role requires deep expertise in Cribl Stream/Edge, hands-on experience with Syslog sources, and a strong understanding of data formatting, transformation, and routing to Snowflake's Data Cloud platform. The ideal candidate will help build a scalable observability data pipeline that supports both operational and analytical use cases.
Key Responsibilities:
Data Pipeline Design & Implementation
Build and manage Cribl pipelines to collect, normalize, transform, and route Syslog data from various network and security devices.
Configure Cribl Stream and/or Edge nodes to route parsed data into Snowflake via supported destinations (e.g., HTTP, S3, or Kafka connectors).
Define schemas, ensure data integrity, and optimize data for downstream use in analytics or security reporting.
Integration & Automation
Create and manage integrations between Cribl and Snowflake, including use of Cribl packs, S3 stages, or direct ingestion patterns.
Automate data delivery and transformations using JavaScript functions, regex, and structured logging practices.
Work with DevOps and Data Engineering teams to automate deployment using Git, CI/CD pipelines, or infrastructure-as-code tools.
Monitoring & Optimization
Monitor data flow pipelines for failures, latency, and volume spikes using Cribl’s monitoring tools.
Tune pipeline performance and implement alerting for data loss, schema drift, or ingestion anomalies.
Optimize cost and resource utilization in both Cribl and Snowflake environments.
Enforce data masking, filtering, and governance rules to meet security and compliance standards (e.g., masking PII).
Required Skills & Qualifications:
2+ years of hands-on experience with Cribl Stream/Edge in a production setting.
Deep understanding of Syslog protocols (UDP/TCP, RFC 3164/5424) and log source configurations (firewalls, routers, Linux systems).
Experience routing structured/unstructured data into Snowflake using Cribl S3 outputs or HTTP connectors.
Proficient in JavaScript, regex, and Cribl functions for data shaping and enrichment.
Familiarity with Snowflake Data Cloud, including data loading, file formats (CSV, JSON, Parquet), and performance tuning.
Working knowledge of cloud platforms (AWS/Azure/GCP), especially services like S3, IAM, and monitoring tools.
Nice to Have:
Cribl Certified Observability Engineer (CCOE) or equivalent certification.
Experience with data pipeline tools like Kafka, Logstash, or Fluent Bit.
Familiarity with Snowflake features such as Snowpipe, Streams, and Tasks for automated ingestion.
Bachelor’s degree in Computer Science, Information Systems, Cybersecurity, or equivalent work experience.