Azure/Databricks Data Engineer designing data-driven applications. Build data pipelines, collaborate with cross-functional teams, and work with Azure Stack tools in a hybrid environment.
Responsibilities
As an Azure and Databricks Data Engineer, the role focuses on designing, building, and supporting data-driven applications that enable innovative, customer-centric digital experiences.

- Work as part of a cross-discipline agile team, collaborating to solve problems across business areas.
- Build reliable, supportable, and performant data lake and data warehouse products to support reporting, analytics, applications, and innovation.
- Apply best practices in development, security, accessibility, and design to deliver high-quality services.
- Develop modular and scalable ELT/ETL pipelines and data infrastructure leveraging diverse enterprise data sources.
- Create curated common data models in collaboration with Data Modelers and Data Architects to support business intelligence, reporting, and downstream systems.
- Partner with infrastructure teams, cyber teams, and Senior Data Developers to ensure secure data handling in transit and at rest.
- Clean, prepare, and optimize datasets with strong lineage and quality controls throughout the integration cycle.
- Support BI Analysts with dimensional modeling and aggregation optimization for visualization and reporting.
- Collaborate with Business Analysts, Data Scientists, Senior Data Engineers, Data Analysts, Solution Architects, and Data Modelers.
- Work with Microsoft stack tools including Azure Data Factory, ADLS, Azure SQL, Synapse, Databricks, Purview, and Power BI.
- Operate within an agile Scrum framework, contributing to backlog development and using Kanban/Scrum toolsets.
- Develop performant pipelines and models using Python, Spark, and SQL across XML, CSV, JSON, REST APIs, and other formats.
- Create tooling to reduce operational toil, and support CI/CD and DevOps practices for automated delivery and release management.
- Monitor in-production solutions, troubleshoot issues, and provide Tier 2 dataset support.
- Implement role-based access control and perform automated unit, regression, UAT, and integration testing.
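The pipeline and data-quality duties above can be sketched in miniature as a plain-Python ELT step (illustrative only; the function, file format, and column names such as `customer_id` are hypothetical examples, not taken from the posting):

```python
# Minimal illustrative ELT step: extract CSV rows, apply a basic
# data-quality gate, and load the curated result as JSON.
# All field names here are hypothetical, chosen for the example.
import csv
import io
import json

def csv_to_curated_json(csv_text: str) -> str:
    """Read raw CSV text, keep only valid rows, and emit curated JSON."""
    rows = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        # Simple quality control: skip rows missing a customer_id.
        if row.get("customer_id"):
            rows.append({"customer_id": row["customer_id"],
                         "amount": float(row["amount"])})
    return json.dumps(rows)

raw = "customer_id,amount\nC1,10.5\n,3.0\nC2,7.25\n"
print(csv_to_curated_json(raw))
```

In a real Databricks pipeline the same extract-validate-load shape would typically be expressed with Spark DataFrames rather than the standard library; this sketch only shows the pattern.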
Requirements
- Completion of a four-year university program in computer science, engineering, or a related data discipline.
- Experience designing and building data pipelines, with strong Python, PySpark, Spark SQL, and SQL skills.
- Experience with Azure Data Factory, ADLS, Synapse, and Databricks, including building pipelines for data lakehouses and warehouses.
- Strong understanding of data structures, governance, and data quality principles, with effective communication skills for technical and non-technical audiences.
Senior Data Engineer leading scalable data architecture design and development at Miratech. Collaborating with teams to ensure delivery of secure and efficient data solutions.
Azure Databricks Architect designing Azure data solutions with Databricks, Data Factory, Synapse Analytics. Builds Medallion architecture, data pipelines, AI/ML workflows, and serves as technical authority.
Lead Data Engineer helping to build data management platforms for the sports industry using Azure and Python. Join a team of over 100 Data Engineers to unlock data potential.
Data Engineer at Borrowell maintaining data infrastructure and developing solutions with modern technologies. Empowering Canadians financially through data-driven insights and applications.
Staff Data Engineer responsible for modernizing analytics platforms and ensuring data governance across business domains at Dropbox. Collaborating with cross-functional teams for efficient data pipelines and governance standards.
Data Engineer Student role at Canada Life focusing on connected data products for Canadian business needs. Collaborating with data teams to support analytics and decision-making initiatives.
Lead technical delivery for a Law 25 regulatory project in banking. Work with cloud, large data sets, analytics, and regulatory reporting in a hybrid Toronto role.
Sr. Data Engineer with Leadership Experience for a contract role in Toronto. Requires 7+ years of experience, SQL/Python/R skills, and leadership capabilities.