Product Security Senior Penetration Tester

Own Company

Product

San Francisco, CA, USA

Posted on May 8, 2026

Description

Overview of the Role:

The AI Security team is a specialized group of cross-functional security engineers working at the intersection of offensive security and artificial intelligence. As a strategic partner to Salesforce's AI Research, Ethics, Engineering, and Cybersecurity Operations teams, we mitigate risk across all AI initiatives by developing novel security frameworks, specialized tooling, and foundational testing methodologies for both generative and predictive AI systems. We are building a robust foundation for AI security through adversarial datasets, in-house content generation systems, and external knowledge sharing — establishing Salesforce as a leader in the AI security field.

Responsibilities:

  • Lead adversarial testing by designing, scoping, and executing red team assessments across our AI ecosystem using a risk-based prioritization approach to discover and address vulnerabilities before they can be exploited.
  • Innovate in AI attack techniques by combining cutting-edge academic research with proven offensive security methods to establish new Tactics, Techniques, and Procedures, and operationalize emerging research to keep assessments aligned with the state of the art.
  • Build and scale security tooling using an automation-first philosophy, driving initiatives to shift security testing left by sharing purpose-built tools with AI security stakeholders across Engineering, Research, and Ethics.
  • Serve as a strategic partner across the company, providing an offensive security perspective to guide product development, support corporate governance, and contribute to policies such as Salesforce's Generative AI Security Standard.

Required Qualifications:

  • 6+ years of experience in offensive security (red teaming, application security, penetration testing, vulnerability research, etc.).
  • 1+ years of direct, hands-on experience testing the security of AI/ML systems, with a deep understanding of LLM vulnerabilities.
  • Strong Python proficiency for tool development, assessment automation, and data analysis.
  • Proven experience leading complex technical projects and/or mentoring security teams, with an exceptional ability to communicate high-stakes technical risks to both engineering and executive audiences.

Preferred Qualifications:

  • Advanced degree (MS or PhD) in a relevant field, or a public portfolio of security research including conference presentations, published papers, CVEs, or open-source contributions.
  • Experience creating or managing large-scale datasets for security testing or machine learning training.
  • Experience building automated testing frameworks or large-scale evaluation pipelines.
  • Familiarity with current AI safety research and frameworks such as MITRE ATLAS and the OWASP Top 10 for LLM Applications.

For roles in San Francisco and Los Angeles: Pursuant to the San Francisco Fair Chance Ordinance and the Los Angeles Fair Chance Initiative for Hiring, Salesforce will consider for employment qualified applicants with arrest and conviction records.