About Mercor
Mercor partners with leading AI labs and enterprises to train frontier models using human expertise.
You will collaborate with top researchers and contribute to improving next-generation AI systems through structured evaluation and high-quality human feedback.
Role Overview
Mercor is seeking generalist writers with experience in rubric-based evaluation to support a short-term AI training initiative.
In this role, you will evaluate AI-generated content and contribute to building structured evaluation frameworks that improve how AI systems assess quality, clarity, and nuance in writing.
Key Responsibilities
- Evaluate AI-generated conversations using predefined rubrics
- Draft and refine evaluation rubrics for writing tasks
- Apply structured scoring criteria (clarity, coherence, accuracy, style)
- Review ambiguous or edge-case outputs for consistency
- Ensure high standards in grading and evaluation processes
Ideal Qualifications
- Experience with rubric-based grading or content evaluation
- Strong generalist writing ability across multiple domains
- Excellent attention to detail and consistency
- Ability to interpret nuanced criteria and make sound judgments
- Background in English, journalism, communications, or related fields
Work Details
- Work Type: Fully remote
- Engagement: Contractor (project-based)
- Flexible schedule with asynchronous work
Compensation & Terms
- Pay Range: $25–$45/hour (based on experience and location)
- Opportunities for expanded responsibilities (reviewer and lead roles)
- Weekly payments via Stripe or Wise
- Projects may be extended or concluded based on performance and needs
- No access to confidential third-party data required
Application Process
- Submit a resume or writing sample
- Selected applicants will be contacted within a few days
Additional Notes
- Equal opportunity employer with accommodations available upon request
- Unable to support H-1B or STEM OPT candidates at this time
- Ideal for writers interested in AI evaluation and content quality systems