Data Engineer building core data infrastructure at CoLab for product strategy and insights. Collaborating cross-functionally to transform and process critical data sets.
Responsibilities
Design and build the end-to-end data platform at CoLab—starting with internal product, customer, and business data.
Connect tools like Mixpanel, Pendo, Salesforce, HubSpot, and our product infrastructure into a unified environment.
Use AWS services (Glue, S3, Athena, Redshift Serverless) to build scalable ETL pipelines (a brief Glue sketch follows this list).
Own and manage data transformation, cataloging, and processing workflows.
Work with CloudOps to inherit foundational infrastructure, then extend and configure it for production use.
Collaborate with stakeholders across RevOps, Finance, Product, and Engineering to understand what data they need—and make it accessible.
Build and maintain clean data layers and support Tableau-based dashboards and reporting tools.
Contribute to automation and ops where it makes data flows more reliable or scalable.
Own data quality, structure, and lineage—this is your domain.
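The stack named above is concrete enough to sketch. Below is a minimal illustration of the kind of Glue ETL job this role would own, assuming the PySpark flavor of Glue; the catalog database, table name, and S3 bucket are hypothetical placeholders, not CoLab's actual resources.

```python
# Minimal AWS Glue (PySpark) ETL sketch. All resource names below are
# hypothetical placeholders -- they illustrate the shape of the work,
# not CoLab's actual pipeline.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read raw product events already landed in S3 and registered in the
# Glue Data Catalog (database and table names are placeholders).
events = glue_context.create_dynamic_frame.from_catalog(
    database="raw_product_data",
    table_name="mixpanel_events",
)

# Drop fields the downstream reporting layer does not need.
cleaned = events.drop_fields(["_metadata", "raw_payload"])

# Write the cleaned layer back to S3 as partitioned Parquet so Athena
# and Redshift Serverless can query it cheaply.
glue_context.write_dynamic_frame.from_options(
    frame=cleaned,
    connection_type="s3",
    connection_options={
        "path": "s3://example-data-lake/cleaned/mixpanel_events/",
        "partitionKeys": ["event_date"],
    },
    format="parquet",
)

job.commit()
```

A job like this would typically run on a schedule; an orchestration sketch follows the Requirements list.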
Requirements
7+ years of experience in data engineering or related technical roles
Experience building AWS-based data pipelines using tools like S3, Glue, Redshift, Athena
Proficiency in Python, SQL, and ETL orchestration (an illustrative orchestration sketch follows this list)
Strong analytical thinking and comfort working across business and technical contexts
Familiarity with tools like Airbyte, Fivetran, or other data integration platforms (nice to have)
Experience working with modern BI tools (e.g., Tableau)
Ability to work independently and bring structure to ambiguous environments
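The posting asks for ETL orchestration without naming a tool, so the sketch below assumes Apache Airflow purely for illustration; the DAG id, Glue job name, and region are hypothetical.

```python
# Illustrative orchestration of the Glue job sketched above, assuming
# Apache Airflow (the posting does not name an orchestrator). All
# identifiers are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.amazon.aws.operators.glue import GlueJobOperator

with DAG(
    dag_id="nightly_product_etl",      # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                 # run once per day
    catchup=False,
) as dag:
    # Trigger the already-defined Glue job by name.
    run_cleanup = GlueJobOperator(
        task_id="run_mixpanel_cleanup",
        job_name="mixpanel-events-cleanup",  # hypothetical Glue job name
        region_name="us-east-1",
    )
```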
Salesforce Data Architect designing and optimizing enterprise-grade data architectures across Salesforce platforms. Collaborating with team members to ensure data quality and readiness for analytics.
Senior Data Engineer at Valtech with a strong background in Google Cloud services. Leading data engineering projects and developing highly available data pipelines.
Sr. Databricks Spark Developer role designing and optimizing data pipelines for banking. Requires Databricks/Spark experience in financial services with strong communication skills.
Data Integration Developer for market risk systems. Responsible for ETL/ELT development, SQL database programming, and supporting risk management systems in a hybrid Mississauga contract role.
Azure & Databricks Data Engineer role designing and building data pipelines using the Microsoft tech stack. 11-month contract, hybrid work in Oshawa, $90-95/hr.
Data Engineering Developer responsible for designing and implementing data flows using cloud technologies like AWS and Databricks. Collaborating within a strong data science team to optimize data for machine learning.
Sr. Manager leading a data engineering team to optimize data infrastructure for insurance. Driving innovative data solutions and managing cross-functional collaborations within a remote setup.