Data Engineer developing and maintaining analytics data pipelines for Samsara’s Data Platform, working collaboratively to integrate diverse data sources and enable data-driven decision-making.
Responsibilities
Develop and maintain end-to-end (E2E) data pipelines and backend ingestion, and contribute to building Samsara’s Data Platform to enable advanced automation and analytics.
Work with data from a variety of sources including but not limited to: CRM data, Product data, Marketing data, Order flow data, Support ticket volume data.
Manage critical data pipelines to enable our growth initiatives and advanced analytics.
Facilitate data integration and transformation requirements for moving data between applications, ensuring interoperability of applications with data layers and the data lake.
Develop and improve the current data architecture, data quality, monitoring, observability and data availability.
Write data transformations in SQL/Python to generate data products consumed by customer systems and the Analytics, Marketing Operations, and Sales Operations teams.
Champion, role model, and embed Samsara’s cultural principles as we scale globally and across new offices.
Requirements
A Bachelor’s degree in computer science, data engineering, data science, information technology, or an equivalent engineering program.
3+ years of experience in data engineering, ETL development, or database architecture.
3+ years of experience building and maintaining large-scale, production-grade, end-to-end data pipelines, including data modeling.
Experience with modern cloud-based data-lake and data-warehousing technology stacks, and familiarity with typical data-engineering tools, ETL/ELT, and data-warehousing processes and best practices.
Experience leading end-to-end projects, including serving as the central point of contact for stakeholders.
Experience engaging directly with internal cross-functional stakeholders to understand their data needs and design scalable solutions.
3+ years of experience with Python and SQL.
Exposure to ETL tools such as Fivetran, dbt, or equivalent.
Exposure to Python-based API frameworks for data pipelines.
RDBMS: MySQL, AWS RDS/Aurora MySQL, PostgreSQL, Oracle, MS SQL Server, or equivalent.
Cloud: AWS, Azure and/or GCP.
Data warehouse: Databricks, Google BigQuery, AWS Redshift, Snowflake, or equivalent.
Hiring AI Engineer with 8–10 years of experience in AI/ML, LLMs, Python, and MLOps. Must have expertise in Generative AI, RAG pipelines, and cloud deployment.
Senior AI Engineer at SecurityScorecard designing customer-facing AI features using TypeScript, Go, and AWS. Responsible for integrating AI capabilities into security solutions and collaborating with cross-functional teams.
Lead AI product initiatives and technical architecture as a Staff AI Engineer at SecurityScorecard. Drive engineering standards and deliver AI-powered solutions at a reputable cybersecurity company.
Senior Machine Learning Engineer at TheAppLabb designing and deploying production-grade ML systems with a focus on AI features and cloud infrastructure.
Lead AI platform architecture across Optical, IP, and Fixed Networks in the Network Infrastructure sector, driving automation and intelligence in network autonomy with extensive experience in AI systems.
AI Engineer designing and optimizing AI solutions focused on LLMs within a hybrid team at KPI. Leading development and deployment of AI-driven strategies using Azure.
Mainframe Developer with strong COBOL, PL1, DB2, VSAM, CICS, JCL, and EASYPLUS skills. Analyze business requirements and translate them into technical design and code.
AI Developer Intern leading development and implementation of AI tutors for the Universa AI Academy curriculum. Responsible for creating AI-powered learning experiences without human instructors.