Description
Salesforce Data Cloud powers the intelligence layer of Customer 360, bringing together data from every system, channel, and interaction to build a unified, real-time customer profile. The Fabric engineering team builds the foundational compute platform that enables scalable, reliable, and AI-driven data processing at massive scale.
We are looking for a visionary and deeply technical Director of Engineering with extensive experience in Big Data and Distributed Compute frameworks such as Apache Spark, Flink, and Ray. In this critical leadership role, you will own the strategy, delivery, and operational excellence of the foundational compute systems that power next-generation data processing, AI workloads, and real-time analytics across billions of records.
Your Impact:
As the leader of the Fabric Compute team, you will:
Define and execute the technical roadmap for scalable, high-performance distributed compute systems leveraging Spark, Flink, and Ray.
Direct the architecture and design of data processing frameworks that power mission-critical batch, streaming, and AI inference workloads in Salesforce Data Cloud.
Lead a team of engineers, fostering a culture of technical excellence, mentorship, and accountability for end-to-end delivery, performance, and scalability of the Compute Platform services.
Collaborate strategically with product management, architecture, and cross-functional engineering teams to align on priorities and deliver key functionalities.
Drive innovation in areas such as real-time data streaming, low-latency processing, vectorized computation, and cost efficiency.
Champion reliability, cost governance, and advanced observability for distributed data workloads running on Kubernetes (EKS) at petabyte scale.
Influence the broader Data Cloud organization on best practices for distributed systems and data engineering.
What You’ll Bring:
12+ years of progressive experience in software engineering, including at least 5 years managing and leading high-performing distributed systems or Big Data engineering teams.
Deep, hands-on expertise in architecting, performance tuning, and operating large-scale distributed data systems and compute frameworks such as Apache Spark, Flink, or Ray in production.
Exceptional programming skills in Java, Scala, or Python.
Comprehensive understanding of distributed computing concepts (task scheduling, checkpointing, fault tolerance, state management) and their strategic application.
Proven experience deploying, optimizing, and governing big data workloads on modern container platforms, specifically Kubernetes (EKS).
Hands-on experience with data formats (Parquet, ORC, Delta) and advanced streaming frameworks.
Demonstrated ability to make complex trade-offs involving scale, cost, reliability, and time-to-market.
Familiarity with modern observability and DevOps tools (Prometheus, Grafana, OpenTelemetry).
Strong communication, collaboration, and executive presence with a track record of attracting, developing, and retaining top engineering talent.
A related technical degree is required.
Why Join Us:
At Salesforce, we’re building the future of enterprise data and AI. As a Director, you will be a key driver in redefining how data is processed, analyzed, and activated at scale. Your leadership and technical decisions will directly power AI, personalization, and real-time intelligence across the Salesforce ecosystem, impacting millions of users and customers globally. If you’re passionate about distributed systems, cutting-edge data technology, and leading teams to solve the world's most complex scale problems — we invite you to lead the charge.
In-office expectations are 10 days per quarter to support customers and/or collaborate with your teams.