About Mercor
Mercor partners with leading AI labs and enterprises to train frontier models using human expertise. You will collaborate with top researchers and contribute to improving next-generation AI systems in your domain.
Role Overview
Mercor is seeking individuals interested in cybersecurity to contribute to AI safety initiatives through red teaming activities.
In this role, you will help test and improve the robustness of large language models (LLMs) by exploring how they respond to adversarial prompts and edge cases.
This opportunity is well suited to motivated individuals, including those without formal cybersecurity experience, who are comfortable working with AI systems and eager to help improve their safety and reliability.
Key Responsibilities
- Perform red teaming tasks on AI systems and language models
- Explore and test model behavior using adversarial prompts
- Identify weaknesses, inconsistencies, or unsafe outputs
- Provide feedback to improve model safety and robustness
- Contribute to AI evaluation and testing workflows
Ideal Qualifications
- Interest in cybersecurity (no professional experience required)
- Familiarity with using LLMs (e.g., prompt experimentation, “vibe coding”)
- Strong curiosity and analytical thinking
- Attention to detail and ability to identify unusual behaviors
Required Certification
- Completion of Mercor’s Red Team Academy certification
Work Details
- Fully remote and flexible
- Project-based engagement focused on AI safety testing
Contract & Payment Terms
- Independent contractor engagement
- Flexible schedule — work on your own time
- Weekly payments via Stripe or Wise
- Projects may be extended, shortened, or concluded early based on performance and business needs
- No access to confidential or proprietary external data required
Additional Notes
- We consider all qualified applicants without regard to legally protected characteristics
- Reasonable accommodations are available upon request
- Unable to support H-1B or STEM OPT candidates at this time