Description
Role Overview
Are you an engineer who loves solving hard problems, shipping real products, and growing fast? Join the Informatica IDMC team, a strategic pillar of Salesforce. We are pioneering the next generation of agentic data integration and serverless architecture: a mission-critical platform that processes petabytes of data for the world’s largest enterprises. This is a core backend engineering role with full end-to-end ownership. You will contribute production-grade code from day one, operating high-throughput, highly available services that form the foundation of our data cloud. You will be a builder, a learner, and a contributor to a platform that handles data at a scale most engineers never get to touch. We are seeking high-potential engineers eager to accelerate their careers within a culture that champions technical excellence, operational rigor, and rapid professional development.
Responsibilities
End-to-End Feature Ownership: Drive the design, development, testing, and deployment of well-scoped features within IDMC's Data Integration platform. Take full ownership of your work from initial requirement through to production deployment, monitoring, and operational health.
Backend Engineering & Quality Champion: Develop and maintain reliable, high-performance backend services in Java within a cloud-native microservices architecture. Champion code quality and maintainability by writing clean, well-tested, peer-review-ready code.
Data Integration Work: Contribute to building data pipelines, APIs, and integration workflows that move and transform data across cloud environments. Learn the fundamentals of large-scale data movement and develop expertise in this space over time.
Testing Discipline: Implement robust automated unit, integration, and regression tests as a first-class part of your development workflow. Actively contribute to high-quality standards by rigorously testing your own features and providing constructive feedback through code reviews.
DevOps & Operational Excellence: Actively engage in CI/CD pipelines, code reviews, and Agile processes. Apply best practices for deployment, monitoring, and effective incident response to maintain system reliability.
Collaborative Engineering: Work closely with senior engineers, Lead Members of Technical Staff (LMTS), and Product Managers to understand requirements, ask sharp questions, and deliver solutions that align with the broader platform architecture.
Technical Specialization & Growth: Invest in continuous technical growth by exploring new tools, participating in critical design discussions, and developing deep expertise in a specific area of the platform (e.g., data movement, service performance, AI-assisted development).
Required Skills & Experience
Professional Experience: 2–4 years of full-time software development experience in a product or enterprise environment, preferably focused on building and maintaining cloud-native backend services.
Core Java & Backend Mastery: Deep hands-on experience with Java (or a similar JVM language). Strong understanding of object-oriented design, concurrent programming, and writing performance-critical, production-grade code.
Cloud-Native Architecture, APIs & Infrastructure: Proven experience designing, building, and operating scalable, high-throughput RESTful APIs within a cloud environment. Solid grasp of microservices architecture, service discovery, and message queue or event-driven patterns. Familiarity with at least one major cloud platform (AWS, Azure, or GCP) is essential.
Data Persistence & Query Optimization: Expert knowledge of RDBMS concepts, including advanced SQL writing, query optimization, and transaction management. Working experience with at least one NoSQL database (e.g., Cassandra, MongoDB) is preferred.
Containerization & Orchestration: Practical experience with Docker and basic familiarity with Kubernetes for service deployment and scaling.
Distributed Systems Exposure: Exposure to distributed processing technologies like Apache Spark or Kafka. Understanding of distributed systems fundamentals and data movement at scale.
Testing & Code Quality: Experience implementing robust automated tests (unit, integration, and contract tests) using frameworks like JUnit or TestNG. A strong commitment to quality, security, and maintainability.
DevOps Tooling & CI/CD: Hands-on proficiency with source control (Git), modern CI/CD pipelines, and familiarity with Agile/Scrum methodologies.
Problem Solving: Strong analytical thinking, attention to detail, and genuine curiosity when debugging or designing complex solutions.
Communication: Clear written and verbal communication — able to articulate technical decisions, ask sharp questions, and collaborate effectively across a distributed team.
Preferred Skills (Good to Have)
AI & LLM Proficiency: Working knowledge of generative AI and LLMs, with the ability to apply them to build intelligent, automated solutions; familiarity with AI-assisted development tools (Claude Code, Cursor, or similar) and an interest in applying LLMs to automate engineering or data workflows.
Data Lakehouse: Awareness of data lakehouse concepts, open table formats (Iceberg, Delta Lake), or data pipeline patterns.
Data Domain Awareness: Any exposure to data integration, data quality, or metadata management concepts — even through coursework or side projects.
Salesforce or SaaS Ecosystem: Familiarity with Salesforce products or experience integrating SaaS platforms is a bonus.
For roles in San Francisco and Los Angeles: Pursuant to the San Francisco Fair Chance Ordinance and the Los Angeles Fair Chance Initiative for Hiring, Salesforce will consider for employment qualified applicants with arrest and conviction records.