Grade exam answers with GPT-4.1 mini, plagiarism checks, Slack alerts and Google Sheets logging
Overview
How It Works
This workflow streamlines academic assessment through a multi-agent AI system that interprets rubrics, grades submissions, checks for plagiarism, performs quality moderation, generates feedback, and escalates borderline cases. Designed for educators and assessment administrators, it reduces inconsistencies in manual marking while embedding integrity checks into every evaluation cycle.

A manual trigger retrieves student answers and rubrics, which are structured before being sent to a Primary Marker Agent. If integrity concerns arise, a Plagiarism Analysis Agent runs in parallel. Results are consolidated and reviewed by a Quality Moderator Agent, followed by a Feedback Generator. Borderline cases are routed to a Secondary Marker Agent, while approved outcomes proceed to escalation preparation, Slack notifications, statistics computation, final consolidation, and logging in Google Sheets.
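The moderation routing described above can be sketched as plain decision logic. This is a hypothetical illustration: the threshold band, field names, and route labels are assumptions, not the workflow's actual node configuration.

```python
# Illustrative sketch of the Route by Moderation Decision step.
# Thresholds and route names are assumed for demonstration only.

def route_submission(moderation_score: float,
                     plagiarism_flagged: bool,
                     borderline_band: tuple = (0.45, 0.55)) -> str:
    """Decide the next node for a graded submission."""
    if plagiarism_flagged:
        return "escalate"             # integrity concern -> Slack alert / human review
    low, high = borderline_band
    if low <= moderation_score <= high:
        return "secondary_marker"     # borderline -> second marking pass
    return "approve"                  # confident result -> feedback and logging

print(route_submission(0.82, False))  # -> approve
print(route_submission(0.50, False))  # -> secondary_marker
print(route_submission(0.90, True))   # -> escalate
```

In the actual workflow this branching lives in the rules node, so the band above would be expressed as node conditions rather than Python.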
Setup Steps
- Configure manual trigger and connect student answer and rubric data sources.
- Add OpenAI API credentials to all OpenAI Chat Model nodes.
- Define moderation thresholds in the Route by Moderation Decision rules node.
- Configure Slack credentials and set the escalation alert channel.
- Set plagiarism sensitivity thresholds in the Plagiarism Analyser Agent node.
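To build intuition for the plagiarism sensitivity threshold mentioned in the last step, here is a minimal sketch of the kind of similarity pre-check an analyser might run. The shingle size and the 0.6 threshold are illustrative assumptions, not the node's defaults.

```python
# Hypothetical similarity check: Jaccard overlap of word n-grams.
# Shingle size n=3 and threshold 0.6 are assumed values for illustration.

def jaccard_similarity(a: str, b: str, n: int = 3) -> float:
    """Jaccard overlap of word n-grams between two answers."""
    def shingles(text: str) -> set:
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}
    sa, sb = shingles(a), shingles(b)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

def flag_plagiarism(a: str, b: str, threshold: float = 0.6) -> bool:
    """Flag a pair of answers whose overlap exceeds the sensitivity threshold."""
    return jaccard_similarity(a, b) >= threshold

print(flag_plagiarism("the cell divides by mitosis in four stages",
                      "the cell divides by mitosis in four stages"))  # -> True
```

Lowering the threshold makes the check more sensitive (more pairs flagged for the agent to review); raising it reduces false positives at the cost of missing light paraphrases.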
Prerequisites
- OpenAI API key (or Anthropic key, if swapping models)
- Slack workspace with bot or webhook credentials
- Google Sheets with service account credentials
- Student answer and rubric data source (API or spreadsheet)
Use Cases
- Automated essay and short-answer marking for university assessments
Customization
- Replace OpenAI with Anthropic Claude for marking and moderation agents
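If you script the agents outside n8n, the provider swap above is easiest behind a small abstraction. This is a hedged sketch with stub names: `ChatModel`, `MarkerAgent`, and `EchoModel` are illustrative, and in n8n itself the swap is done by replacing the Chat Model node, not by writing code.

```python
# Illustrative provider abstraction: any backend (OpenAI, Anthropic Claude)
# that satisfies the ChatModel protocol can drive the marking agent.
from typing import Protocol

class ChatModel(Protocol):
    def complete(self, system: str, prompt: str) -> str: ...

class MarkerAgent:
    """Grades one answer against a rubric using any ChatModel backend."""
    def __init__(self, model: ChatModel):
        self.model = model

    def grade(self, rubric: str, answer: str) -> str:
        return self.model.complete(
            system="You are a strict exam marker. Apply the rubric.",
            prompt=f"Rubric:\n{rubric}\n\nAnswer:\n{answer}",
        )

# Stub backend standing in for a real OpenAI or Anthropic client:
class EchoModel:
    def complete(self, system: str, prompt: str) -> str:
        return "graded"

print(MarkerAgent(EchoModel()).grade("Max 10 marks", "Mitosis has four stages"))
```

Wrapping each vendor SDK in a `complete()` adapter keeps marking and moderation prompts identical across providers, so only the adapter changes when you switch to Claude.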
Benefits
- Automates end-to-end marking with built-in plagiarism and moderation checks