Data Engineer architecting and building cloud-based systems for Semios Group, an agricultural technology company. Responsibilities include managing data interfaces, building scalable infrastructure, and delivering actionable insights.
Responsibilities
Architect and build cloud-based systems to manage and improve the interface between Semios data and its consumers.
Design, develop, and maintain scalable infrastructure to process and store data, integrate data-driven models, and automate manual processes.
Implement highly scalable big data analytics systems in a cloud environment.
Design and build reliable, observable, and fault-tolerant data systems and processes.
Create data tools that help analytics and data science team members build and optimize our product into an innovative industry leader.
Assemble large, complex data sets that meet functional and non-functional business requirements.
Continuously identify bottlenecks in the data stack and optimize queries and processes for cost and performance.
Write clear documentation of data processes and products.
Requirements
Advanced SQL skills: the ability to write elegant queries, written for humans first and machines second.
The ability to thrive both autonomously and in a team environment.
Hands-on experience with provisioning and developing on cloud platforms (familiarity with GCP is a definite plus).
Experience with at least one data warehouse (BigQuery, Redshift, Snowflake, or an on-prem equivalent).
Excellent verbal and written communication skills: a talent for distilling complex ideas for different audiences.
In-depth experience with big data, with a proven track record of effective collection, storage, and access.
Proven experience with workflow and scheduling tools (e.g., Prefect, Airflow, Dagster, Kubeflow) and version control (Git).
Fluency in Python, JavaScript (Node.js), or another imperative language, or the ability to pick one up quickly and with enthusiasm.
Excellent troubleshooting skills to rapidly identify and resolve issues.
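The workflow and scheduling tools named above (Prefect, Airflow, Dagster, Kubeflow) all model a pipeline as a directed acyclic graph of tasks and run each task only after its dependencies complete. A minimal sketch of that idea in plain Python, using the standard library's `graphlib`; the task names (extract, transform, load, report) are hypothetical placeholders, not part of any real Semios pipeline:

```python
from graphlib import TopologicalSorter

# Each key is a task; its value is the set of tasks it depends on.
# These names are illustrative only.
dag = {
    "extract": set(),
    "transform": {"extract"},
    "load": {"transform"},
    "report": {"load", "transform"},
}

def run_pipeline(dag):
    """Run tasks in an order that respects their dependencies."""
    order = list(TopologicalSorter(dag).static_order())
    for task in order:
        print(f"running {task}")  # a real orchestrator would execute the task here
    return order

order = run_pipeline(dag)
```

Production orchestrators add what this sketch omits: retries, scheduling, parallel execution of independent tasks, and monitoring, which is why hands-on experience with one of them matters for this role.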
Nice to have
Significant exposure to at least one relational database (Postgres, MySQL).
Real world experience with containers (Docker) & container management systems (Kubernetes).
Experience with, or interest in, IoT cloud platforms and IoT data.
Familiarity with data transformation tools (dbt, SQLMesh, Dataform) and syncing tools (e.g., dlt, Fivetran, Airbyte).
Salesforce Data Architect designing and optimizing enterprise-grade data architectures across Salesforce platforms. Collaborating with team members to ensure data quality and readiness for analytics.
Senior Data Engineer with a strong background in Google Cloud services at Valtech. Leading data engineering projects and developing highly available data pipelines.
Sr. Databricks Spark Developer role designing and optimizing data pipelines for banking. Requires Databricks/Spark experience in financial services with strong communication skills.
Data Integration Developer for market risk systems. Responsible for ETL/ELT development, SQL database programming, and supporting risk management systems in a hybrid Mississauga contract role.
Azure & Databricks Data Engineer role designing and building data pipelines using the Microsoft tech stack. 11-month contract, hybrid work in Oshawa, $90-95/hr.
Data Engineering Developer responsible for designing and implementing data flows using cloud technologies like AWS and Databricks. Collaborating within a strong data science team to optimize data for machine learning.
Sr. Manager leading a data engineering team to optimize data infrastructure for insurance. Driving innovative data solutions and managing cross-functional collaborations within a remote setup.