Role: AI Engineering Manager / Staff Engineer (Python / LLMs / Infrastructure)
Location: Fully Remote (Europe)
Salary: €110k-€135k
Employment Type: Full-time

*Please note: Only candidates with Staff-level experience or above will be considered. Proven team leadership - whether as a Tech Lead, Staff Engineer, or Engineering Manager - is a core requirement for this role.*

Join a fast-growing product company at the cutting edge of AI technology. This is an opportunity to lead a talented, cross-functional engineering team while staying hands-on with a modern, high-performance tech stack. Our client's mission is to build one of the most human-like AI platforms in the world, with millions of users and a strong reputation across academia and media.

We're seeking an AI Engineering Manager or Staff Engineer who combines strong backend engineering and infrastructure skills with proven leadership experience. You'll play a pivotal role in scaling production AI systems, guiding technical direction, and helping drive delivery of sophisticated, LLM-powered solutions.

Key Responsibilities:
- Lead and mentor a high-performing team of AI and backend engineers
- Own and evolve the system architecture for AI/ML deployment at scale
- Build and maintain FastAPI-based microservices with Python async patterns
- Manage AI-related infrastructure: containerization (Docker), CI/CD (GitHub Actions), observability (Datadog)
- Design and support scalable data pipelines using Redis, MongoDB, and Kafka
- Integrate with LLMs (OpenAI, Anthropic, LLaMA) and vector databases (e.g. Pinecone)
- Oversee structured logging and system monitoring
- Collaborate cross-functionally with AI research, DevOps, and product teams
- Support a robust, high-scale environment serving 500K+ users
- Help shape best practices in software engineering and team culture

Core Requirements:
- 5+ years of backend development experience in Python
- Leadership background: experience managing engineering teams or squads
- Deep knowledge of Redis (asyncio), MongoDB schema design, and FastAPI
- Hands-on experience with LLM APIs (OpenAI, Anthropic, etc.) and vector databases (e.g. Pinecone)
- Familiarity with LLaMA models and deployment patterns
- Proficiency with Docker and docker-compose for environment management
- Solid experience with Kafka in event-driven architectures
- Expertise in CI/CD with GitHub Actions and observability tooling (e.g. Datadog)
- Track record of shipping systems at scale (500K+ users)
- Excellent communication and stakeholder collaboration skills
- A "startup mindset": proactive, adaptable, and comfortable with ambiguity

Nice to Have:
- Experience with Kubernetes and deployment orchestration tools (e.g. Quadrant)
- Scala familiarity or willingness to learn
- Previous work in AI/ML product teams or research-led environments
Jonathan Harrold