Data Engineer at Wave, building tools and infrastructure for data products and insights to support small businesses. Collaborating with cross-functional teams to drive data-centric organizational transformation.
Responsibilities
Design, build, and deploy components of a modern data platform, including CDC-based ingestion using Debezium and Kafka, a centralized Hudi-based data lake, and a mix of batch, incremental, and streaming data pipelines (see the ingestion sketch after this list).
Maintain and enhance the Amazon Redshift warehouse and legacy Python ELT pipelines, while driving the transition to a Databricks- and dbt-based analytics environment that will replace the current stack.
Build fault-tolerant, scalable, and cost-efficient data systems, and continuously improve observability, performance, and reliability across both legacy and modern platforms.
Partner with cross-functional teams to design and deliver data infrastructure and pipelines that support analytics, machine learning, and GenAI use cases, ensuring timely and accurate data delivery.
Work autonomously to identify opportunities to optimize data pipelines and improve workflows, and implement them under tight timelines and evolving requirements.
Respond to PagerDuty alerts, troubleshoot incidents, and proactively implement monitoring and alerting to minimize incidents and maintain high availability.
Provide technical guidance to colleagues, clearly communicating complex concepts and actively listening to build trust and resolve issues efficiently.
Assess existing systems, improve data accessibility, and deliver practical solutions that enable internal teams to generate actionable insights and enhance the experience of our external customers.
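As a purely illustrative sketch of the ingestion pattern named in the first responsibility above (Debezium change events flowing through Kafka into a Hudi table on S3), here is roughly what that flow could look like in PySpark Structured Streaming. The broker address, topic, schema, key fields, and S3 paths are hypothetical placeholders, not Wave's actual configuration.

```python
# Minimal sketch of a Debezium -> Kafka -> Hudi ingestion job (PySpark Structured Streaming).
# All names (broker, topic, bucket, columns) are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import LongType, StringType, StructField, StructType

spark = (
    SparkSession.builder
    .appName("cdc-orders-to-hudi")
    # The Hudi Spark bundle must be on the classpath,
    # e.g. --packages org.apache.hudi:hudi-spark3-bundle_2.12:<version>
    .getOrCreate()
)

# Assumed shape of the Debezium "after" image for an example `orders` table.
after_schema = StructType([
    StructField("order_id", LongType()),
    StructField("status", StringType()),
    StructField("updated_at", LongType()),
])
envelope_schema = StructType([
    StructField("op", StringType()),        # c / u / d
    StructField("after", after_schema),
    StructField("ts_ms", LongType()),
])

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "my-msk-broker:9092")   # placeholder
    .option("subscribe", "dbserver1.public.orders")            # placeholder topic
    .option("startingOffsets", "latest")
    .load()
)

changes = (
    raw.select(from_json(col("value").cast("string"), envelope_schema).alias("e"))
    .where(col("e.op").isin("c", "u"))       # deletes would need separate handling
    .select("e.after.*", col("e.ts_ms").alias("_event_ts"))
)

hudi_options = {
    "hoodie.table.name": "orders",
    "hoodie.datasource.write.recordkey.field": "order_id",
    "hoodie.datasource.write.precombine.field": "_event_ts",
    "hoodie.datasource.write.operation": "upsert",
}

query = (
    changes.writeStream.format("hudi")
    .options(**hudi_options)
    .option("checkpointLocation", "s3://example-bucket/checkpoints/orders")  # placeholder
    .outputMode("append")
    .start("s3://example-bucket/lake/orders")                                # placeholder
)
query.awaitTermination()
```

A production version would also handle deletes, the full Debezium envelope, schema evolution, and partitioning; the sketch only shows the overall shape of the flow.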
Requirements
Data Engineering Expertise: 3+ years of experience building data pipelines and managing a secure, modern data stack, including CDC streaming ingestion (e.g., Debezium) into data warehouses that support AI/ML workloads.
AWS Cloud Proficiency: At least 3 years of experience working with AWS cloud infrastructure, including Kafka (MSK), Spark / AWS Glue, and infrastructure as code (IaC) using Terraform.
Data Modelling and SQL: Fluency in SQL and a strong understanding of data modelling principles and data storage structures for both OLTP and OLAP.
Databricks Experience: Developing or maintaining a production data system on Databricks is a significant plus.
Strong Coding Skills: Experience writing and reviewing high-quality, maintainable code to improve the reliability and scalability of data platforms, using Python, SQL, and dbt, and leveraging third-party frameworks as needed.
Data Lake Development: Prior experience building data lakes on S3 using Apache Hudi with Parquet, Avro, JSON, and CSV file formats.
CI/CD Best Practices: Experience developing and deploying data pipeline solutions using CI/CD best practices to ensure reliability and scalability (see the testing sketch below).
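To make the CI/CD point above concrete, a minimal sketch of the kind of unit test that typically gates pipeline deployments: keep the transformation logic pure and exercise it with pytest so it runs in any CI job before code ships. The function, field names, and test data here are hypothetical.

```python
# Hypothetical example: a pure transformation kept separate from I/O so it can be
# unit-tested in CI before the pipeline is deployed.
from datetime import datetime, timezone


def normalize_order(record: dict) -> dict:
    """Normalize a raw CDC order record into the warehouse schema (illustrative only)."""
    return {
        "order_id": int(record["order_id"]),
        "status": record.get("status", "unknown").lower(),
        "updated_at": datetime.fromtimestamp(record["updated_at"] / 1000, tz=timezone.utc),
    }


def test_normalize_order_lowercases_status_and_parses_timestamp():
    raw = {"order_id": "42", "status": "PAID", "updated_at": 1_700_000_000_000}
    out = normalize_order(raw)
    assert out["order_id"] == 42
    assert out["status"] == "paid"
    assert out["updated_at"].year == 2023
```

Running tests like this on every change, and gating deployment on them, is the core of the practice this requirement describes.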
Bonus points for:
Data Governance Knowledge: Familiarity with data governance practices, including data quality, lineage, and privacy, and experience using data cataloging tools to support discoverability and compliance.
Data Integration Tools: Working knowledge of tools such as Stitch and Segment CDP for integrating diverse data sources into a cohesive ecosystem.
Analytical and ML Tools Expertise: Experience with Athena, Redshift, or SageMaker Feature Store for analytics and ML workflows is a plus.
Benefits
Bonus Structure
Employer-paid Benefits Plan
Health & Wellness Flex Account
Professional Development Account
Wellness Days
Holiday Shutdown
Wave Days (extra vacation days in the summer)
Get A-Wave Program (work from anywhere in the world up to 90 days)