Big Data Engineer, DBT, Redshift

Posted last month


About the role

  • Lead Data Engineer role at Two Circles, designing streaming architectures and data warehouses. Collaborate with clients and integrate streaming data from GCP into AWS environments.

Responsibilities

  • Own the architectural direction for streaming data ingestion from GCP into AWS
  • Design resilient ingestion frameworks including error handling, retry strategies, monitoring, and failure isolation
  • Implement distributed processing pipelines using Spark / PySpark or similar frameworks
  • Create and maintain scalable data warehouses and associated ETL/ELT processes using DBT models in Amazon Redshift
  • Design and implement DBT projects including macros, tests, documentation, and reusable modeling patterns
  • Conduct Redshift query and DBT performance tuning to optimize warehouse efficiency and cost
  • Define and enforce best practices for:
      • Data modeling
      • Version control (Git-based workflows)
      • CI/CD pipelines for DBT deployments
      • Automated testing at model, transformation, and pipeline levels
  • Ensure robust testing is embedded into every DBT model (schema tests, custom tests, data validation checks)
  • Lead code reviews and architectural design reviews
  • Work with AWS services including Redshift, S3, Glue, Step Functions, Lambda (Python), Athena, and EMR

Requirements

  • 6+ years of data engineering experience in big data environments
  • Proven experience designing and implementing streaming architectures
  • Extensive hands-on DBT experience (models, macros, tests, documentation)
  • Strong Amazon Redshift architecture and performance optimization expertise
  • Experience building CI/CD pipelines for data platforms
  • Experience working in client-facing delivery contexts

Technical Skills

  • AWS: Redshift, S3, Glue, Step Functions, Lambda (Python), Athena, EMR
  • Strong SQL and Redshift performance tuning expertise
  • Python and PySpark (or equivalent distributed processing frameworks)
  • Git-based version control workflows
  • Deep understanding of data warehousing, modeling, and big data systems

Job type

Contract

Experience level

Mid level, Senior

Salary

CA$450 - CA$500 per day

Degree requirement

No Education Requirement

Tech skills

Amazon Redshift, AWS, ETL, Google Cloud Platform, PySpark, Python, Spark, SQL

Location requirements

Hybrid, Vancouver, Canada
