Senior Data Engineer – Big Data, Snowflake, Agentic AI

Posted 16 minutes ago

About the role

  • Lead Data Engineer / Snowflake Engineer at Brillio in Montreal focusing on Snowflake and Generative AI data solutions. Responsibilities include architecting data platforms and collaborating with AI teams.

Responsibilities

  • Architect, design, and optimize Snowflake data platforms on AWS for high performance and cost efficiency
  • Lead end-to-end ELT/ETL pipelines using Snowflake, AWS services, and modern data engineering tools
  • Implement advanced Snowflake features (Performance Optimization, Warehousing strategy, Data Sharing, Streams & Tasks, Time Travel, Zero Copy Cloning)
  • Design data foundations that support Agentic AI and GenAI workloads, including AI-ready datasets, vectorized data, and metadata-driven pipelines
  • Collaborate with AI/ML teams to enable autonomous agents, LLM-driven analytics, and intelligent data orchestration
  • Provide technical leadership, code reviews, and mentoring to data engineering teams
  • Partner with business and product stakeholders to translate analytics and AI requirements into scalable data solutions

Requirements

  • 8–10 years of experience in Big Data Engineering / Analytics
  • Expert-level Snowflake experience, including large-scale production deployments
  • Strong hands-on experience with AWS (S3, EC2, Lambda, Glue, Redshift/Athena, IAM, CloudWatch, Step Functions)
  • Proven experience building cloud-native data architectures on AWS
  • Solid programming skills in Python and SQL
  • Experience with data modeling for analytics and AI use cases
  • Hands-on or applied exposure to Agentic AI, Generative AI, or AI‑driven data platforms
  • Experience leading or mentoring engineering teams in enterprise environments

Highly Desirable

  • Experience integrating LLMs, autonomous agents, or AI orchestration frameworks with data platforms
  • Exposure to vector databases, embeddings, or AI‑optimized data pipelines
  • Experience with dbt, Airflow, Kafka, Spark, or similar tools
  • Prior onsite experience in large, complex enterprise data ecosystems

Job type

Full Time

Experience level

Senior

Salary

CA$90 - CA$100 per hour

Degree requirement

No Education Requirement

Tech skills

Airflow, Amazon Redshift, AWS, Cloud, EC2, ETL, Kafka, Python, Spark, SQL

Location requirements

Hybrid – Montreal, Canada
