Data Engineer role focusing on Azure Data Factory & Databricks for building data pipelines. 11-month contract, hybrid work in Oshawa, $90-$95/hour.
Responsibilities
- Design, develop, and productionize modular, scalable ELT/ETL pipelines using Azure Data Factory and Databricks.
- Build and maintain data lake and data warehouse solutions supporting analytics, applications, and innovation.
- Cleanse, transform, and optimize large datasets using Python, PySpark, Spark SQL, and SQL.
- Develop curated, business-centric common data models in collaboration with Data Architects and Data Modelers.
- Implement data quality, lineage, and governance controls throughout the data lifecycle.
- Ingest data from multiple sources including CSV, JSON, XML, REST APIs, and enterprise systems.
- Optimize pipeline performance, reliability, and scalability.
- Troubleshoot data ingestion, transformation, latency, accuracy, and integrity issues.
- Collaborate with Business Analysts, Data Scientists, Senior Data Engineers, and Architects.
- Support BI and analytics use cases, including dimensional modeling and aggregation optimization.
- Participate in CI/CD pipelines, DevOps workflows, and automated testing strategies.
- Contribute to metadata management and data cataloging.
- Provide Tier-2 support for production data pipelines and datasets.
- Participate in peer code reviews and agile SCRUM ceremonies.
Requirements
- Proven experience building new data pipelines using Azure Data Factory (ADF) for orchestration and Azure Databricks (Spark/PySpark) for transformations.
- Hands-on expertise with Azure Data Factory and Azure Databricks.
- Strong programming skills in Python, PySpark, Spark SQL, and SQL.
- Experience building data lakehouse and data warehouse pipelines.
- Strong understanding of data structures, data integration patterns, and processing frameworks.
- Knowledge of data governance, security, and data quality principles.
- Experience working in an Agile/SCRUM environment.
- Bachelor's degree in Computer Science, Software Engineering, or a related field.
Data Engineer building data integration pipelines for data lakes and warehouses. Collaborating with stakeholders to meet business requirements in a leading publishing company.
Google Cloud Data Engineer implementing data ingestion and analytics frameworks at Fueled. Specializing in Google Cloud Platform and modern data modeling.
Consulting Senior Data Architect specializing in Microsoft Fabric solutions for digital products. Engage in hands-on delivery, architecture, and governance for data engineering in a remote capacity.
Data Engineer at Motive delivering data infrastructure for the AI era. Collaborating with stakeholders, building models, and implementing innovative tooling.
Data Architect designing and governing data foundations for analytics and AI applications at Clio. Collaborating cross-functionally to develop high-quality data models and standards.
IAM/Data Engineer role in Toronto (Hybrid). Requires 4+ years in ETL, data pipelines, cloud platforms, and skills in Windows IAM, Ansible, Terraform, SQL, Python/Java, Spark/Kafka.
Data Migration Specialist managing client data migrations to gaiia's platform. Collaborating with teams to ensure accurate and timely data transitions.
Senior Data Architect/Strategist at Robots & Pencils blending advanced data knowledge with problem-solving to drive intelligent products and smarter business decisions.
Principal Data Architect at PointClickCare ensuring coherent and scalable data architecture. Driving unified data direction while collaborating with Engineering Architecture team for AI enablement.