Intermediate Software Engineer – Artificial Intelligence (AI)

Posted 4 days ago


About the role

  • As an Intermediate Software Engineer specializing in AI, you will build innovative domain-services features for Tucows Domains, collaborating with fellow engineers to develop AI-powered solutions.

Responsibilities

  • Design and build AI-driven features for our domain services platform using Python and Golang.
  • Integrate and fine-tune open-source models such as LLaMA 3.2 and similar cutting-edge architectures via tools like Ollama.
  • Research, evaluate, and implement emerging AI technologies that align with our vision for smarter, more intuitive products and services.
  • Collaborate with internal stakeholders and fellow engineers to rapidly prototype and iterate on machine learning and LLM-based features.
  • Contribute to a modern AI development stack, ensuring scalability, performance, and ethical usage of models.
  • Actively participate in the open-source ecosystem and bring relevant tools and techniques back to the team.

Requirements

  • Bachelor’s degree in Software Engineering, Computer Science, or a related field
  • 3+ years of professional software engineering experience in production environments
  • Strong proficiency in Python and Golang
  • Solid foundation in software design principles, patterns, and service-oriented architecture
  • Experience contributing to scalable systems and component-level architecture
  • Ability to design and build RESTful APIs for model serving and AI-enabled workflows
  • Working knowledge of relational/SQL databases (preferably PostgreSQL) and data modeling for AI use cases
  • Strong understanding of modern LLM concepts, including transformer architectures and attention mechanisms
  • Hands-on experience adapting and deploying open-source models (e.g., LLaMA, Mistral, Mixtral) using tools like Ollama or Hugging Face Transformers
  • Experience with fine-tuning techniques (e.g., LoRA, QLoRA, PEFT) for domain-specific adaptation
  • Proficiency in prompt engineering (few-shot, chain-of-thought, structured outputs)
  • Familiarity with model serving patterns for efficient, scalable inference
  • Experience designing and implementing Retrieval-Augmented Generation (RAG) pipelines end-to-end
  • Hands-on experience with vector databases (e.g., pgvector, Pinecone, Weaviate)
  • Familiarity with embedding models, chunking strategies, and semantic search patterns
  • Understanding of data pipelines for ingestion, transformation, and inference result storage
  • Familiarity with Model Context Protocol (MCP) server design patterns
  • Experience with agent orchestration frameworks (e.g., LangChain, LangGraph)
  • Understanding of tool use, function calling, and multi-step reasoning in LLM workflows
  • Experience with LLM evaluation frameworks (e.g., RAGAS, promptfoo, or custom pipelines)
  • Familiarity with observability and tracing tools (e.g., LangSmith, Helicone)
  • Comfort with structured logging, metrics, and alerting for AI workloads
  • Experience with containerization and cloud-native deployment (preferably AWS)
  • Familiarity with Kubernetes or EKS for scaling model-serving workloads
  • Understanding of GPU considerations for inference (quantization, batching, memory trade-offs)
  • Active interest in the open-source AI ecosystem
  • Strong collaboration and communication skills across technical and business teams
  • Enthusiasm for emerging AI technologies with a practical, delivery-focused mindset

Benefits

  • Health insurance
  • Professional development
  • Flexible work arrangements
  • Paid time off
  • Generous compensation

Job type

Full Time

Experience level

Mid level, Senior

Salary

CA$100,350 - CA$111,500 per year

Degree requirement

Bachelor's Degree

Tech skills

AWS, Cloud, Kubernetes, Postgres, Python, SQL, Go

Location requirements

Hybrid – Toronto, Canada
