Backend Infrastructure Engineer
Envoy AI
ABOUT ENVOY AI
Freight brokerages run on people making calls, chasing updates, and jumping between a dozen tools. We're building AI that does the actual work — not dashboards, not copilots, but an agent that handles the full job.
Our AI agent makes carrier calls, books loads, verifies credentials, sends check-ins, and chases documents. We're live with freight brokers and 3PLs moving real shipments today. The technical challenge: orchestrating thousands of concurrent voice conversations, executing complex workflows in real time, and integrating with systems built decades before APIs existed.
THE ROLE
We're looking for a full-stack engineer who's excited about owning the infrastructure that powers AI-native logistics operations. You'll work alongside our CTO and founding senior engineers, playing a critical role in scaling our platform from seed stage to supporting hundreds of customers and thousands of concurrent AI agents.
This role combines full-stack development with infrastructure ownership. You'll build features across our entire stack while ensuring our systems are reliable, performant, and cost-effective as we scale. If you've been part of an infrastructure journey from 10 to 1000 customers, or from prototype to production-grade systems, we want to hear from you.
WHAT YOU WILL DO
Infrastructure & Scalability
- Own the reliability and performance of our production infrastructure as we scale
- Design and implement high-availability architecture
- Build observability into everything: define SLOs, implement intelligent alerting, and create dashboards (see the illustrative sketch after this list)
- Optimize our real-time voice pipeline to support thousands of concurrent AI-powered phone calls
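To give a concrete flavor of the observability work, here is a minimal, illustrative sketch (not our production code) of a request-latency histogram exported through OpenTelemetry from a FastAPI service. The meter name, metric name, and attributes are hypothetical; an SLO such as a p95 latency target would be defined on top of a signal like this.

```python
# Illustrative sketch only: record HTTP request latency with OpenTelemetry.
# Metric/meter names are hypothetical; in production the OTel SDK would be
# configured to export these measurements to Datadog.
import time

from fastapi import FastAPI, Request
from opentelemetry import metrics

app = FastAPI()
meter = metrics.get_meter("envoy-backend")  # hypothetical meter name

request_duration = meter.create_histogram(
    name="http_request_duration_ms",  # hypothetical metric name
    unit="ms",
    description="End-to-end request latency per route",
)

@app.middleware("http")
async def record_latency(request: Request, call_next):
    start = time.perf_counter()
    response = await call_next(request)
    elapsed_ms = (time.perf_counter() - start) * 1000
    request_duration.record(
        elapsed_ms,
        attributes={"route": request.url.path, "status": response.status_code},
    )
    return response
```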
Full-Stack Development
- Build end-to-end features across our FastAPI backend and SvelteKit frontend
- Contribute to our agentic platform that orchestrates complex multi-step logistics operations
- Create intuitive interfaces for workflow visualization and debugging
- Design APIs that scale with growing customer and agent volume (a minimal sketch follows this list)
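As a rough illustration of what "APIs that scale" means in this stack, here is a hedged sketch of a cursor-paginated FastAPI endpoint. The Load model and the /loads route are hypothetical, not our actual schema.

```python
# Illustrative sketch only: a cursor-paginated list endpoint in FastAPI.
# "Load" and "/loads" are hypothetical names for the example.
from fastapi import FastAPI, Query
from pydantic import BaseModel

app = FastAPI()

class Load(BaseModel):
    id: str
    origin: str
    destination: str
    status: str

class LoadPage(BaseModel):
    items: list[Load]
    next_cursor: str | None  # cursor pagination keeps list endpoints cheap at scale

@app.get("/loads", response_model=LoadPage)
async def list_loads(cursor: str | None = None, limit: int = Query(50, le=200)):
    # In a real service this would query PostgreSQL keyed on the cursor;
    # an empty page is returned here to keep the sketch self-contained.
    return LoadPage(items=[], next_cursor=None)
```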
DevOps & Developer Experience
- Improve our CI/CD pipelines for faster, safer deployments
- Optimize Docker builds and Azure Container Apps scaling policies
- Enhance our IaC (Bicep) to support multi-region deployments
- Establish engineering practices in an AI-first SDLC: better local development, testing utilities, deployment scripts (a small testing sketch follows this list)
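As an example of the kind of testing utility this covers, here is an illustrative pytest fixture around FastAPI's TestClient. The import path and the /health route are assumptions for the sketch, not our real project layout.

```python
# Illustrative sketch only: in-process API tests with FastAPI's TestClient.
import pytest
from fastapi.testclient import TestClient

from app.main import app  # hypothetical import path for the FastAPI app

@pytest.fixture()
def client():
    # TestClient runs the ASGI app in-process, so no server needs to be running.
    with TestClient(app) as client:
        yield client

def test_health_endpoint(client):
    response = client.get("/health")  # assumes a conventional health-check route
    assert response.status_code == 200
```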
Ownership & Impact
- Take ownership of major technical initiatives from design to deployment
- Participate in architecture decisions that shape our platform's future
- Collaborate on code reviews, design reviews, and technical strategy
- Help establish engineering best practices as we grow the team
WHAT WE ARE LOOKING FOR
Required
- 4+ years of professional software engineering experience
- Experience scaling production systems; you've felt the pain of database bottlenecks, worked on performance optimization, or implemented caching strategies
- Strong fundamentals in system design, databases, and distributed systems
- Proficiency with Python or TypeScript (ideally both)
- Experience with cloud infrastructure (Azure, AWS, or GCP)
- Track record of deploying production systems with proper monitoring and logging
- Comfortable working across the stack—backend, frontend, infrastructure, and everything in between
- AI-native engineer: You use AI coding tools (Claude, Cursor, GitHub Copilot, etc.) daily and understand how to work effectively with LLM assistants
- Clear communication skills and ability to work autonomously
- Authorized to work in the US
Bonus Points
- Early-stage startup experience (founding engineer, first 10 employees)
- Built or scaled infrastructure that supports AI/LLM workloads
- Worked with real-time systems (WebSockets, voice processing, LiveKit)
- Experience with infrastructure as code (Terraform, Bicep, CloudFormation)
- Built agentic applications using LangChain, LangGraph, or similar frameworks
- Prior experience in logistics or supply chain technology
- Strong GitHub profile or open-source contributions
- Shipped features with LLM integrations in production
OUR STACK
Backend
- Python 3.12, FastAPI
- PostgreSQL
- Redis
AI/LLM
- LangChain & LangGraph for agentic workflows
- OpenAI, Anthropic, and open-source models
- Real-time voice AI (LiveKit)
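For candidates newer to LangGraph, the sketch below shows the general shape of a graph-based agentic workflow. The state fields and node names are hypothetical and far simpler than our production graphs; it is an illustration of the style, not our actual workflow.

```python
# Illustrative sketch only: a two-node LangGraph workflow over a typed state.
from typing import TypedDict

from langgraph.graph import StateGraph, START, END

class CallState(TypedDict):
    carrier_phone: str
    transcript: str
    booked: bool

def place_call(state: CallState) -> dict:
    # In production this step would drive the real-time voice pipeline.
    return {"transcript": f"called {state['carrier_phone']}"}

def book_load(state: CallState) -> dict:
    return {"booked": "called" in state["transcript"]}

builder = StateGraph(CallState)
builder.add_node("place_call", place_call)
builder.add_node("book_load", book_load)
builder.add_edge(START, "place_call")
builder.add_edge("place_call", "book_load")
builder.add_edge("book_load", END)

graph = builder.compile()
result = graph.invoke({"carrier_phone": "+15551234567", "transcript": "", "booked": False})
```

Real workflows add conditional edges, retries, and tool calls, but the graph-of-typed-state pattern stays the same.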
Frontend
- SvelteKit with TypeScript
- Tailwind CSS, TanStack Query
- Cloudflare Workers
Infrastructure
- Microsoft Azure
- Docker, Azure Bicep (IaC)
- GitHub Actions for CI/CD
- Datadog + OpenTelemetry for observability
Integrations
- Communication platforms (Teams, Twilio, SendGrid)
- Transportation management systems
- Real-time WebSocket updates (sketch below)
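As a minimal illustration of the real-time updates piece, here is a sketch of a FastAPI WebSocket endpoint. The route and payload are hypothetical; in production, updates would be fanned out from Redis pub/sub or a task queue rather than echoed back.

```python
# Illustrative sketch only: a WebSocket endpoint pushing per-load updates.
from fastapi import FastAPI, WebSocket, WebSocketDisconnect

app = FastAPI()

@app.websocket("/ws/loads/{load_id}")
async def load_updates(websocket: WebSocket, load_id: str):
    await websocket.accept()
    try:
        while True:
            # Echo the client's message to keep the sketch runnable end to end.
            message = await websocket.receive_text()
            await websocket.send_json({"load_id": load_id, "echo": message})
    except WebSocketDisconnect:
        pass
```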
WHY JOIN ENVOY AI
Technical Challenge
- Build infrastructure that supports thousands of concurrent AI agents making phone calls, sending messages, and executing workflows
- Solve hard problems at the intersection of AI, real-time systems, and distributed computing
- Work with cutting-edge AI technology (LLMs, voice AI, agentic frameworks) in production
Ownership & Impact
- Own major technical initiatives from day one—your decisions will shape the platform
- Work directly with the CTO and founding engineers
- Build the foundation that scales to thousands of enterprise users
- See your code impact real logistics operations daily
AI-Native Team
- We embrace AI tools; everyone uses Codex, Claude Code, Cursor, and other AI assistants extensively
- You're encouraged to explore and share new best practices and tooling
- We keep code quality and best-in-class engineering practices at the center of our AI-native SDLC
Early-Stage Opportunity
- Significant equity and influence
- Help build engineering culture and practices from the ground up
- Grow into leadership roles as we scale the team
- Shape product direction and business strategy with your contributions