Data Engineer responsible for building real-time data pipelines and analytics at Branch. Collaborates on a data platform designed for high availability and performance at petabyte scale.
Responsibilities
Architect, build, and own real-time and batch data pipelines that power attribution decisions and marketing insights.
Deliver high-quality, reliable datasets for customer dashboards, fraud systems, and internal analytics use cases.
Optimize ingestion and aggregation performance using tools like Flink, Spark, Kafka, and Druid.
Partner with the Data Platform team to make infrastructure-level decisions that impact performance, latency, and cost.
Own schema design, versioning, and deployment of datasets across Iceberg, S3, and other analytical data stores.
Build and maintain robust monitoring, alerting, and self-healing mechanisms to ensure high system availability.
Collaborate cross-functionally with Product, Customer Success, and Data Science to identify and deliver new data capabilities.
Requirements
5+ years of software engineering experience, ideally in data engineering or large-scale backend systems.
Proficiency in SQL and at least one backend language (Java or Python).
Understanding of distributed systems, data modeling, and real-time data processing.
Hands-on experience with AWS cloud tools and big data platforms such as Kafka, Flink, Spark, Airflow, dbt, Druid.
A solid grasp of data warehousing principles and familiarity with columnar storage formats (Parquet, Avro).
Curiosity and drive to work with event-driven data systems that operate at massive scale.
Strong communication skills and a desire to collaborate across time zones and teams.
Salesforce Data Architect designing and optimizing enterprise-grade data architectures across Salesforce platforms. Collaborating with team members to ensure data quality and readiness for analytics.
Senior Data Engineer with a strong background in Google Cloud services at Valtech. Leading data engineering projects and developing highly available data pipelines.
Sr. Databricks Spark Developer role designing and optimizing data pipelines for banking. Requires Databricks/Spark experience in financial services with strong communication skills.
Data Integration Developer for market risk systems. Responsible for ETL/ELT development, SQL database programming, and supporting risk management systems in a hybrid Mississauga contract role.
Azure & Databricks Data Engineer role designing and building data pipelines using the Microsoft tech stack. 11-month contract, hybrid work in Oshawa, $90-95/hr.
Data Engineering Developer responsible for designing and implementing data flows using cloud technologies like AWS and Databricks. Collaborating within a strong data science team to optimize data for machine learning.
Sr. Manager leading a data engineering team to optimize data infrastructure for insurance. Driving innovative data solutions and managing cross-functional collaborations within a remote setup.