About the role

  • Data Engineer developing and maintaining data pipelines that power analytics for Samsara’s Data Platform, working collaboratively to integrate diverse data sources and enable data-driven decision-making.

Responsibilities

  • Develop and maintain end-to-end (E2E) data pipelines and backend ingestion, and participate in building Samsara’s Data Platform to enable advanced automation and analytics.
  • Work with data from a variety of sources, including but not limited to CRM data, product data, marketing data, order flow data, and support ticket volume data.
  • Manage critical data pipelines to enable our growth initiatives and advanced analytics.
  • Facilitate data integration and transformation requirements for moving data between applications, ensuring interoperability of applications with data layers and the data lake.
  • Develop and improve the current data architecture, data quality, monitoring, observability, and data availability.
  • Write data transformations in SQL/Python to generate data products consumed by customer systems and by the Analytics, Marketing Operations, and Sales Operations teams.
  • Champion, role model, and embed Samsara’s cultural principles as we scale globally and across new offices.

Requirements

  • A Bachelor’s degree in computer science, data engineering, data science, information technology, or an equivalent engineering program.
  • 3+ years of experience in data engineering, ETL development, or database architecture.
  • 3+ years of experience building and maintaining large-scale, production-grade, end-to-end data pipelines, including data modeling.
  • Experience with modern cloud-based data-lake and data-warehousing technology stacks, and familiarity with typical data-engineering tools, ETL/ELT, and data-warehousing processes and best practices.
  • Experience leading end-to-end projects, including serving as the central point of contact for stakeholders.
  • Ability to engage directly with internal cross-functional stakeholders to understand their data needs and design scalable solutions.
  • 3+ years of experience with Python and SQL.
  • Exposure to ETL tools such as Fivetran, dbt, or equivalent.
  • Exposure to Python-based API frameworks for data pipelines.
  • RDBMS: MySQL, AWS RDS/Aurora MySQL, PostgreSQL, Oracle, MS SQL Server, or equivalent.
  • Cloud: AWS, Azure and/or GCP.
  • Data warehouse: Databricks, Google BigQuery, AWS Redshift, Snowflake, or equivalent.

Benefits

  • Health benefits
  • Employee-led remote and flexible working
  • Competitive total compensation package

Job title

Data Engineer

Job type

Full Time

Experience level

Mid level, Senior

Salary

CA$104,550 - CA$135,300 per year

Degree requirement

Bachelor's Degree

Tech skills

Amazon Redshift, AWS, Azure, BigQuery, Cloud, ETL, Google Cloud Platform, MySQL, Oracle, Postgres, Python, RDBMS, SQL

Location requirements

Remote, Canada
