Data Engineer

Posted via LinkedIn Recruiter (not a company profile)

Posted 6 hours ago

About the role

  • Azure/Databricks Data Engineer designing data-driven applications. Build data pipelines, collaborate with cross-functional teams, and work with Microsoft Azure stack tools (Data Factory, Databricks, Synapse) in a hybrid environment.

Responsibilities

  • Design, build, and support data-driven applications that enable innovative, customer-centric digital experiences.
  • Work as part of a cross-discipline agile team, collaborating to solve problems across business areas.
  • Build reliable, supportable, and performant data lake and data warehouse products to support reporting, analytics, applications, and innovation.
  • Apply best practices in development, security, accessibility, and design to deliver high-quality services.
  • Develop modular and scalable ELT/ETL pipelines and data infrastructure leveraging diverse enterprise data sources.
  • Create curated common data models in collaboration with Data Modelers and Data Architects to support business intelligence, reporting, and downstream systems.
  • Partner with infrastructure teams, cyber teams, and Senior Data Developers to ensure secure data handling in transit and at rest.
  • Clean, prepare, and optimize datasets with strong lineage and quality controls throughout the integration cycle.
  • Support BI Analysts with dimensional modeling and aggregation optimization for visualization and reporting.
  • Collaborate with Business Analysts, Data Scientists, Senior Data Engineers, Data Analysts, Solution Architects, and Data Modelers.
  • Work with Microsoft stack tools including Azure Data Factory, ADLS, Azure SQL, Synapse, Databricks, Purview, and Power BI.
  • Operate within an agile Scrum framework, contributing to backlog development and using Kanban/Scrum toolsets.
  • Develop performant pipelines and models using Python, Spark, and SQL across XML, CSV, JSON, REST APIs, and other formats (a brief illustrative sketch follows this list).
  • Create tooling to reduce operational toil, and support CI/CD and DevOps practices for automated delivery and release management.
  • Monitor in-production solutions, troubleshoot issues, and provide Tier 2 dataset support.
  • Implement role-based access control and perform automated unit, regression, UAT, and integration testing.
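A minimal PySpark sketch of the ELT work described above, assuming a Databricks/Delta environment; the storage paths, app name, and column names are hypothetical examples, not details from this posting.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("orders-elt-sketch").getOrCreate()

    # Extract: land raw JSON from a hypothetical ADLS container.
    raw = spark.read.json("abfss://raw@examplelake.dfs.core.windows.net/orders/")

    # Transform: deduplicate, enforce basic quality rules, and type the columns.
    clean = (
        raw.dropDuplicates(["order_id"])
           .filter(F.col("order_id").isNotNull())
           .withColumn("order_ts", F.to_timestamp("order_ts"))
           .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    )

    # Load: write a curated Delta table for downstream BI and reporting.
    (clean.write.format("delta")
          .mode("overwrite")
          .save("abfss://curated@examplelake.dfs.core.windows.net/orders/"))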

Requirements

  • Completion of a four-year university program in computer science, engineering, or a related data discipline.
  • Experience designing and building data pipelines, with strong Python, PySpark, SparkSQL, and SQL skills (see the sketch after this list).
  • Experience with Azure Data Factory, ADLS, Synapse, and Databricks, including building pipelines for data lakehouses and warehouses.
  • Strong understanding of data structures, governance, and data quality principles.
  • Effective communication skills for technical and non-technical audiences.
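A small SparkSQL sketch of the kind of star-schema aggregation this skill set supports; the fact and dimension tables are hypothetical and would need to exist in the catalog before the query runs.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("dim-model-sketch").getOrCreate()

    # Aggregate a hypothetical star schema (fact_orders joined to date and
    # product dimensions) into a daily sales summary for BI dashboards.
    daily_sales = spark.sql("""
        SELECT d.calendar_date,
               p.product_category,
               SUM(f.amount) AS total_sales
        FROM fact_orders f
        JOIN dim_date d ON f.date_key = d.date_key
        JOIN dim_product p ON f.product_key = p.product_key
        GROUP BY d.calendar_date, p.product_category
    """)
    daily_sales.show()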

Job title

Data Engineer

Job type

Contractor

Experience level

Not specified

Salary

$92 per hour

Degree requirement

Bachelor's degree

Tech skills

Azure Data Factory, ADLS, Azure SQL, Synapse, Databricks, Purview, Power BI, Python, Spark, SQL, PySpark, SparkSQL

Location requirements

Oshawa, Ontario
