Graph Data Engineer
Location: New York, NY – 3 days onsite – Local NY/NJ candidates only; must be able to attend a face-to-face interview
Interview: Onsite interview required (no virtual interviews)
Duration: Long Term Contract
Domain: Banking (Required)
Education: Bachelor’s or Master’s degree from a U.S.-based university required
Basic Qualifications
- Bachelor’s or Master’s degree in Computer Science or a related discipline (U.S. degree required).
- 4+ years of professional software development experience.
- Strong understanding of algorithms and data structures.
- Proficiency in Java with Spring Boot.
- Experience developing APIs and microservices.
- Familiarity with ETL processes and data quality validation.
- Experience working with cloud platforms such as AWS, Azure, or GCP.
- Ability to work with or quickly learn graph technologies and graph-based environments.
Preferred Skills
- Experience with graph technologies such as RDF, RDFS, SPARQL, SHACL, and Cypher.
- Experience with graph databases including Neptune, AllegroGraph, GraphDB, Neo4j, or TigerGraph.
- Experience building knowledge graphs, ontologies, or semantic data systems.
- Familiarity with graph and machine learning algorithms and measures such as PageRank, Connected Components, and cosine similarity.
- Experience with Python libraries including Pandas, NumPy, and Scikit-learn.
- Exposure to Spark and Databricks.
- Understanding of the full software development lifecycle including integration and UI development.
Position Overview
We are seeking a Graph Data Engineer to design, develop, and enhance Jefferies’ enterprise Knowledge Graph platform supporting multiple business applications. You will work closely with software engineers, data scientists, and product managers to develop scalable graph-based solutions.
The ideal candidate will own data ingestion, graph modeling, ontology design, and query optimization, enabling advanced analytics and relationship-driven insights across the organization.
Key Responsibilities
- Design and develop graph-based data solutions using modern graph technologies.
- Implement graph algorithms to model and analyze complex relationships in large datasets.
- Design and maintain graph schemas and ontologies for dynamic and interconnected data systems.
- Develop and maintain ETL pipelines for ingesting and enriching data in the knowledge graph.
- Implement data quality checks to ensure data accuracy, validity, and integrity.
- Develop APIs and microservices supporting graph data applications.
- Build and support data engineering applications in AWS.
- Work in an Agile environment and collaborate with cross-functional teams including Data Engineering, Data Science, QA, Infrastructure, and Frontend teams.
Contact: adarsh@staffxpertllc.com