AI Red-Teamer — Adversarial AI Testing; English

Posted 11 March 2026 · Hourly · Remote · English
Mercor
$50 – $111 per hour

Location: Remote-friendly (US time zones); open to candidates in the US, UK, and Canada
Type: Full-time or Part-time

Why This Role Exists

At Mercor, we believe the safest AI is the one that’s already been attacked — by us. That’s why we’re building a pod of AI Red-Teamers: human data experts who probe AI models with adversarial inputs, surface vulnerabilities, and generate the red-team data that makes AI safer for our customers.

This role may include reviewing AI outputs that touch on sensitive topics such as bias, misinformation, or harmful behaviors. All work is text-based, and participation in higher-sensitivity projects is optional and supported by clear guidelines and wellness resources.

What You’ll Do

  • Red-team AI models and agents: jailbreaks, prompt injections, misuse cases, exploits
  • Generate high-quality human data: annotate failures, classify vulnerabilities, and flag systemic risks
  • Apply structure: follow taxonomies, benchmarks, and playbooks to keep testing consistent
  • Document reproducibly: produce reports, datasets, and attack cases customers can act on
  • Flex across projects: support different customers, from LLM jailbreaks to socio-technical abuse testing

Who You Are

  • You bring prior red-teaming experience (AI adversarial work, cybersecurity, socio-technical probing)
  • You’re curious and adversarial: you instinctively push systems to breaking points
  • You’re structured: you use frameworks or benchmarks, not just random hacks
  • You’re communicative: you explain risks clearly to technical and non-technical stakeholders
  • You’re adaptable: you thrive on moving across projects and customers

Nice-to-Have Specialties

  • Adversarial ML: jailbreak datasets, prompt injection, RLHF/DPO attacks, model extraction
  • Cybersecurity: penetration testing, exploit development, reverse engineering
  • Socio-technical risk: harassment/disinfo probing, abuse analysis
  • Creative probing: psychology, acting, writing for unconventional adversarial thinking

What Success Looks Like

  • You uncover vulnerabilities automated tests miss
  • You deliver reproducible artifacts that strengthen customer AI systems
  • Evaluation coverage expands: more scenarios tested, fewer surprises in production
  • Mercor customers trust the safety of their AI because you’ve already probed it like an adversary

Why Join Mercor

  • Build experience in human data-driven AI red-teaming at the frontier of safety
  • Play a direct role in making AI systems more robust, safe, and trustworthy

Compensation

  • Pay: $50 – $111/hour
  • Type: Hourly contract
  • Location: Remote

The pay rate for this role may vary by project, customer, and content category. Compensation will be aligned with the level of expertise required, the sensitivity of the material, and the scope of work for each engagement.

43 people hired recently for this role. 56 slots remaining.

Getting Started

New to Remote Gig Work?

No fluff, no theory. The First Month Playbook walks you through profile setup, landing your first client, and building a workflow that actually sticks.

Read the Playbook
Featured Platform

Apply to Mercor

Mercor matches you with AI and tech companies looking for remote talent. One application, multiple opportunities. Affiliate link — we may earn a commission.

Apply Now on Mercor