About the role

  • MLOps Data Engineer at Triton Digital, bridging data science and production systems: designing CI/CD pipelines and optimizing large-scale data processing with Apache Spark for advertising systems.

Responsibilities

  • Design, implement, and maintain CI/CD pipelines for machine learning workflows using tools like GitHub Actions, Azure DevOps, or Jenkins.
  • Build and optimize data processing pipelines in Apache Spark (PySpark and Scala) for large-scale, distributed listener datasets.
  • Deploy and manage Databricks environments, ensuring efficient cluster usage, job scheduling, and cost optimization.
  • Collaborate with data scientists to productionize ML models, integrating them into scalable APIs or batch processing systems that feed real-time, machine-readable audience signals.
  • Implement automated testing, monitoring, and alerting for ML pipelines to ensure the reliability and reproducibility that certified buyers require.
  • Champion best practices in version control, model registry management, and environment reproducibility.
  • Help evolve our listener data infrastructure toward agent-compatible supply — live, structured, queryable data feeds that autonomous buying systems can discover and act on without human mediation.
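To give a flavor of the "automated testing and reproducibility" responsibility above, here is a minimal sketch of the kind of check a CI job (GitHub Actions, Jenkins, etc.) could run on every commit. The feature step, column names, and fingerprinting approach are illustrative assumptions, not Triton Digital's actual code.

```python
# Hypothetical reproducibility check for a deterministic pipeline step.
# A CI job can pin the fingerprint of a fixed input and fail the build
# if a code change silently alters the output.
import hashlib
import json

def build_features(events):
    """Deterministic feature step: total listen time per listener."""
    totals = {}
    for e in events:
        totals[e["listener_id"]] = totals.get(e["listener_id"], 0) + e["listen_seconds"]
    # Sort keys so serialization (and therefore the fingerprint) is stable.
    return dict(sorted(totals.items()))

def fingerprint(features):
    """Hash the feature output; CI compares this against a pinned value."""
    blob = json.dumps(features, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

events = [
    {"listener_id": "a", "listen_seconds": 120},
    {"listener_id": "b", "listen_seconds": 45},
    {"listener_id": "a", "listen_seconds": 30},
]

feats = build_features(events)
assert feats == {"a": 150, "b": 45}
# Two runs over the same input must yield the same fingerprint.
assert fingerprint(feats) == fingerprint(build_features(events))
```

In practice the same idea scales up: pin fingerprints of small golden datasets, run them in CI, and treat any drift as a reproducibility regression.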

Requirements

  • Proven experience in Data Engineering, MLOps, and DevOps roles with a focus on automation and scalability.
  • Strong programming skills in Python, with hands-on experience in Apache Spark.
  • Scala is a huge plus.
  • Advanced expertise in Databricks, including Delta Lake, Structured Streaming, and feature engineering.
  • Solid understanding of CI/CD principles and tools (e.g., GitHub Actions, Jenkins, Azure DevOps, GitLab CI, ArgoCD).
  • Familiarity with cloud platforms (AWS, Azure, or GCP) for data and ML workloads.
  • A problem-solving mindset and the ability to work closely with cross-functional teams.
  • Strong architectural mindset, capable of evaluating trade-offs across cost, performance, scalability, and maintainability when selecting tools and designing systems.
  • Experience working with containerized and orchestrated environments (Kubernetes / OpenShift), including deployment, scaling, and fault tolerance of data and ML workloads.
  • Advanced English required.
  • French is an asset.
  • Familiarity with IAB data standards, programmatic advertising infrastructure, or AdTech data pipelines is a strong asset.

Benefits

  • Fully remote position (must be based in ONTARIO or QUEBEC)
  • 4 weeks of vacation + 5 paid personal days annually
  • Group insurance programs as of your first day, including access to telemedicine and an EAP
  • Collective RRSP with matching contribution
  • Internet reimbursement and more

Job title

MLOps Data Engineer

Job type

Full Time

Experience level

Mid level, Senior

Salary

Not specified

Degree requirement

Bachelor's Degree

Tech skills

Apache, AWS, Azure, Cloud, Google Cloud Platform, Jenkins, Kubernetes, OpenShift, PySpark, Python, Scala, Spark

Location requirements

Remote, Canada
