AI Engineer (R&D)

LottieFiles

Software Engineering, Data Science

Posted on May 13, 2026

About the Role


LottieFiles is building next-generation AI-powered creative tools for animation, design, and interactive media workflows. We are looking for an AI Engineer to help develop specialized generation, editing, evaluation, and optimization systems for creative and structured content. The role centers on structured generation, domain-specific model adaptation, evaluation systems, feedback pipelines, and production AI infrastructure. You will work closely with engineering, design, and product teams to improve generation quality, reliability, efficiency, and usability across AI-assisted creative workflows.


What You’ll Work On
• Natural-language-to-structured-content generation workflows.
• Structure-preserving editing and modification systems.
• Validation and repair pipelines for generated outputs.
• Evaluation systems for quality, correctness, consistency, and runtime performance.
• Training and evaluation datasets built from production usage and interaction traces.
• Smaller, lower-latency models for targeted generation, editing, routing, and repair tasks.


Key Responsibilities
• Design and execute fine-tuning strategies for structured generation and editing workflows.
• Build supervised datasets from successful generations, retries, failures, and user edits.
• Develop measurable benchmarks for generation quality, correctness, and edit preservation.
• Experiment with open-source models such as Llama, Qwen, Mistral, DeepSeek, or related architectures.
• Implement LoRA, QLoRA, supervised fine-tuning (SFT), distillation, preference tuning, or synthetic data approaches where appropriate.
• Build automated pipelines for collecting, cleaning, evaluating, and promoting production data into training datasets.
• Use validation systems, intermediate representations, runtime analysis, and rendered outputs as structured feedback signals for models.
• Improve retry, repair, and self-correction workflows for generation pipelines.
• Collaborate with engineering and product teams to improve model reliability and output quality.


Required Qualifications
• Strong experience building with LLMs or structured generation systems in production or applied research settings.
• Hands-on experience fine-tuning or adapting open-source language models.
• Strong Python engineering skills.
• Experience building evaluation systems, ML experimentation workflows, or data pipelines.
• Strong understanding of prompt engineering, structured outputs, tool use, and model failure analysis.
• Ability to define measurable evaluation criteria rather than relying only on subjective review.
• Comfort debugging systems spanning model outputs, validation systems, runtime behavior, and rendered results.
• Strong communication and collaboration skills.


Preferred Qualifications
• Experience with code generation, DSL generation, or compiler-aware AI systems.
• Experience with LoRA, QLoRA, SFT, preference tuning, distillation, or synthetic data generation.
• Familiarity with animation systems, graphics pipelines, design tools, SVG, WebGL, shaders, or procedural graphics.
• Experience with multimodal or visual-language-model evaluation workflows.
• Experience with observability or ML evaluation tooling such as Weights & Biases, Langfuse, MLflow, or OpenTelemetry.
• Experience building agentic systems, orchestration pipelines, or multi-step generation workflows.
• Familiarity with ASTs, intermediate representations (IRs), or structured program representations.