Data Engineer owning the full data stack for Practice Better, a health and wellness platform. Collaborating with teams to ensure reliable data and developing AI-assisted workflows.
Responsibilities
Own and evolve the full data infrastructure: Snowflake, dbt, Stitch, Rivery, and pipeline orchestration
Build and maintain ELT/ETL pipelines and dbt models supporting analytics, reporting, and the AI warehouse agent
Manage Snowflake for performance, cost, and reliability
Build and maintain integrations between production systems — Stripe, Square, HubSpot, and others — and the data warehouse
Implement data quality monitoring, testing frameworks, and alerting so problems surface before they reach stakeholders
Trace anomalies to root causes and ensure leadership is making decisions on reliable data
Establish and maintain data governance standards: documentation, access controls, and metric definitions
Partner with Product and Engineering to instrument event tracking and ensure complete data capture across the customer journey
Design and evolve AI-assisted data workflows, including our internal warehouse agent, ensuring they’re powered by clean, well-modeled, trustworthy data
Bring strong intuition for AI failure modes and how to constrain systems for reliability
Enable stakeholders to self-serve through AI-assisted tooling; your work is what makes that possible
Partner with the Head of RevOps on data model integrity, executive reporting, and prioritizing what gets built or fixed first
Work closely with the analytics team to understand how data is consumed across Sigma, Amplitude, Hightouch, and the AI warehouse agent, so infrastructure decisions account for downstream impact
Partner with GTM stakeholders across Growth, Customer Success, Marketing, and Payments to understand data dependencies, surface issues early, and translate business needs into technical requirements
Requirements
5+ years as a data engineer, analytics engineer, or backend engineer with a data focus
Experience as the primary or sole data engineer on a small, high-growth team (you’ve owned a full stack end-to-end, not just a slice of one)
Strong SQL and hands-on experience with dbt, Snowflake, Stitch, and Rivery
Python proficiency for pipeline development and automation
Experience building and optimizing ELT/ETL pipelines at scale, with strong understanding of data warehouse architecture and dimensional modeling
A pragmatic approach to AI — focused on outcomes, not demos. You’ve built or worked with AI-assisted data workflows in production, understand their failure modes, and know how to constrain them for reliability
Comfort working with large, imperfect datasets. You diagnose data quality issues independently and don’t wait for someone to hand you a spec
Experience with version control (Git), CI/CD practices, and infrastructure as code
Clear, proactive written communication. You document your work so the next person isn't starting from scratch
Benefits
Comprehensive health and dental benefits from day 1
RRSP matching
Generous paid parental leave
Annual learning stipends
Unlimited vacation
$750 annual Health & Wellness Allowance
$1,000 annual Learning & Development Allowance to support your growth
$500 annual Home Office Allowance to set up a productive remote workspace
Personalized support for family-building and fertility journeys
Confidential, digital mental health support from licensed professionals
Company-wide holiday closure in December
Regular virtual company-wide events, lunches, and team socials to stay connected
Data Engineer helping to improve ETL processes for investment analyses at The Battle of Giants. Collaborating directly with leadership to shape strategies and insights.
Data Engineer at Tiger Analytics architecting scalable Generative AI solutions in the AWS ecosystem for Fortune 500 partners. Joining a team with deep expertise in Data Science and Machine Learning.
Senior Information Architect/Data Engineer working with a global software services provider. Leading the architecture of a new cloud data platform for innovative technology solutions.
Senior Software Developer modernizing the Data Transfer Platform for Intrahealth, a healthcare EMR provider. Focusing on scalable and configurable backend systems in a complex environment.
Data Engineer Intern gaining hands-on experience in TD's big data platform. Collaborating on software development and system enhancements while learning about analytical tools and technologies.
Senior Data Engineer at Mozilla managing data lifecycle and quality. Building data pipelines and collaborating with product teams for data-driven decisions.
Principal Product Manager leading product strategy for health data platform at PointClickCare. Collaborating across teams to optimize health data for analytics and care delivery.
Data Engineer optimizing and maintaining data pipelines in Blackline Safety's IoT-enabled safety ecosystem. Collaborating with product, engineering, and analytics teams on impactful data-driven initiatives.
Azure Data Engineer contractor for an Ontario Crown Corporation. Designing and building data pipelines using Azure Data Factory, Databricks, Python, and PySpark. Three days per week onsite in Oshawa.