Description
About the Team
At Slack, data isn't just infrastructure — it's the engine behind every great enterprise decision. Our Data Engineering team builds the products and pipelines that power our largest customers, delivering the metrics and insights that drive how they work. We operate at massive scale, transforming billions of records into actionable intelligence, and we're looking for talented engineers to help us do it even better.
As a Data Engineer on the Enterprise team, you'll partner cross-functionally with business stakeholders, analytics teams, and backend engineers to design, build, and scale both batch and real-time data pipelines. You'll contribute directly to initiatives that support key decision-makers within our enterprise customer base — helping them understand adoption, engagement, and the health of their Slack deployments.
You'll also be a core contributor to GovSlack, our cross-team initiative to deliver feature parity for U.S. government agencies and other highly regulated organizations. This is high-visibility, high-impact work that matters beyond the product — it supports national infrastructure.
We're looking for passionate, detail-oriented engineers who are excited about building a rock-solid data foundation and making a real impact at one of the world's most widely used collaboration platforms.
What You'll Be Doing
Translate complex business requirements into scalable, reusable data models that are easy to understand and adopt across subject areas
Design, implement, and maintain data pipelines that deliver high-quality data under defined SLAs
Partner with product, analytics, and engineering teams to build trusted, well-documented foundational datasets aligned with business strategy
Champion data strategy across multiple teams and use cases, driving consistency and reliability at scale
Expand access to core company metrics through strong technical and process foundations
Identify, document, and promote data engineering best practices across Slack
What You Should Have
5+ years of experience in data architecture, data modeling, master data management, or metadata management
Hands-on expertise with relational and dimensional modeling approaches — including star and snowflake schemas — as well as columnar storage formats
Proven track record optimizing schemas, tuning SQL, and scaling ETL pipelines in OLAP and data warehouse environments
Experience with Airflow or a comparable workflow orchestration platform for data pipeline management
Strong proficiency in Python and SQL
Familiarity with data governance frameworks, SDLC, and Agile methodologies
Excellent written and verbal communication skills with the ability to partner effectively across technical and business teams
U.S. citizen or lawful permanent resident, willing to undergo a background check for GovSlack authorization
Bonus Points
Hands-on experience with Spark SQL, AWS (S3, EMR), Apache Pinot, or Hadoop/Big Data ecosystems
Experience with Flink or other real-time stream processing frameworks
Proficiency in Scala
Familiarity with cloud platforms (AWS, GCP, or Azure)
Experience with NoSQL data stores
For roles in San Francisco and Los Angeles: Pursuant to the San Francisco Fair Chance Ordinance and the Los Angeles Fair Chance Initiative for Hiring, Salesforce will consider for employment qualified applicants with arrest and conviction records.