About Mercor
Mercor partners with leading AI labs and enterprises to train frontier models using human expertise. You will collaborate with top researchers and contribute to improving next-generation AI systems in your domain.
Role Overview
Mercor is seeking cybersecurity professionals with red teaming expertise to contribute to AI safety initiatives.
In this role, you will help evaluate and improve the robustness of large language models (LLMs) by simulating adversarial scenarios and identifying potential vulnerabilities.
This position is ideal for candidates with a background in cybersecurity and a strong understanding of how LLMs behave under adversarial conditions.
Key Responsibilities
- Conduct red teaming exercises on AI systems and language models
- Simulate cybersecurity attack scenarios to identify vulnerabilities
- Evaluate model responses for safety, robustness, and risk exposure
- Provide feedback to improve model defenses and reliability
- Contribute to AI safety and security testing workflows
Ideal Qualifications
- Professional or academic background in cybersecurity
- Experience using LLMs for red teaming or adversarial testing
- Strong analytical and problem-solving skills
- Attention to detail and ability to identify subtle vulnerabilities
Required Certification
- Completion of Mercor’s Red Team Academy certification
Work Details
- Fully remote and flexible
- Project-based engagement focused on AI safety and testing
Contract & Payment Terms
- Independent contractor engagement
- Flexible schedule — work on your own time
- Weekly payments via Stripe or Wise
- Projects may be extended, shortened, or concluded early based on performance and business needs
- No access to confidential or proprietary external data required
Additional Notes
- We consider all qualified applicants without regard to legally protected characteristics
- Reasonable accommodations are available upon request
- Unable to support H-1B or STEM OPT candidates at this time