Preparing high-quality instructional materials remains a labor-intensive process that often requires extensive coordination among teaching faculty, instructional designers, and teaching assistants.
In this work, we present Instructional Agents, a multi-agent large language model (LLM) framework designed to automate end-to-end course material generation, including syllabus creation, lecture scripts, LaTeX-based slides, and assessments. Unlike existing AI-assisted educational tools that focus on isolated tasks, Instructional Agents simulates role-based collaboration among educational agents to produce cohesive and pedagogically aligned content. The system operates in four modes: Autonomous, Catalog-Guided, Feedback-Guided, and Full Co-Pilot, enabling flexible control over the degree of human involvement.
We evaluate Instructional Agents across five university-level computer science courses and show that it produces high-quality instructional materials while significantly reducing development time and human workload. By supporting institutions with limited instructional design capacity, Instructional Agents provides a scalable and cost-effective framework to democratize access to high-quality education, particularly in underserved or resource-constrained settings. Our code is available at https://github.com/Hyan-Yao/instructional_gents/.
"From syllabus to exams, AI agents build it for you." As professors and instructors, you spend countless hours drafting syllabi, slides, and assessments. This process is necessary but often exhausting, and many of you have limited support from instructional designers or teaching assistants. Instructional Agents are here to help: automating routine tasks, aligning materials with your teaching goals, and giving you more time to focus on what matters most: teaching and mentoring your students.
They simulate collaboration among faculty, instructional designers, and teaching assistants, following the ADDIE framework to generate syllabi, slides, scripts, and assessments. You choose the mode: Autonomous Mode generates all materials with no human input; Catalog-Guided Mode lets institutional data and prior feedback guide generation; Feedback-Guided Mode has instructors review outputs and provide corrections for refinement; and Full Co-Pilot Mode pauses at each step to request real-time feedback. Together, these modes let you balance speed, quality, and human involvement, producing cohesive, high-quality course materials in less time.
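To make the workflow concrete, here is a minimal, hypothetical sketch of how a role-based pipeline with selectable modes might be wired together. The role names, stage order, prompts, and helper functions are illustrative assumptions, not the project's actual implementation.

```python
# Hypothetical sketch of a role-based course-generation pipeline.
# Role names, stage order, prompts, and call_llm() are illustrative
# assumptions, not the project's actual implementation.
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Agent:
    role: str          # e.g. "faculty", "instructional_designer", "teaching_assistant"
    instructions: str  # role-specific prompt


def call_llm(agent: Agent, task: str, context: str) -> str:
    """Stand-in for a backend LLM call (e.g. gpt-4o-mini); stubbed so the sketch runs."""
    return f"[{agent.role} draft of {task}]"


# ADDIE-inspired stages, each owned by a simulated role.
STAGES = [
    ("learning_objectives", Agent("faculty", "Draft measurable learning objectives.")),
    ("syllabus", Agent("instructional_designer", "Assemble a week-by-week syllabus.")),
    ("slides", Agent("instructional_designer", "Write LaTeX slides for each week.")),
    ("scripts", Agent("teaching_assistant", "Write lecture scripts matching the slides.")),
    ("assessments", Agent("teaching_assistant", "Create assessments aligned to the objectives.")),
]


def generate_course(topic: str, mode: str = "autonomous",
                    get_feedback: Optional[Callable[[str, str], str]] = None) -> dict:
    """Run all stages; `mode` controls how much human input is requested."""
    context = f"Course topic: {topic}"
    # Catalog-Guided Mode would seed `context` with institutional catalog data here.
    materials = {}
    for name, agent in STAGES:
        draft = call_llm(agent, name, context)
        if mode == "full_copilot" and get_feedback is not None:
            # Pause at every step and fold real-time instructor feedback into a revision.
            notes = get_feedback(name, draft)
            draft = call_llm(agent, name, f"{context}\nInstructor feedback: {notes}")
        materials[name] = draft
        context += f"\n\n[{name}]\n{draft}"  # later stages build on earlier outputs
    # Feedback-Guided Mode would add a post-generation review-and-revise pass here.
    return materials


if __name__ == "__main__":
    course = generate_course("Introduction to Databases", mode="autonomous")
    print(list(course))  # stage names produced by the pipeline
```

The point of the sketch is that the modes differ only in where human feedback enters the loop; the role-based stages and the shared context that carries earlier outputs into later stages stay the same.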
Instructional Agents consistently reduce workload while preserving quality. In our evaluation across five computer science courses, we compared four operational modes (Autonomous, Catalog-Guided, Feedback-Guided, and Full Co-Pilot) and measured quality across six instructional materials (Learning Objectives, Syllabus, Assessments, Final Slides, Slide Scripts, and Instructional Package).
Human reviewers found that greater collaboration leads to higher quality. Full Co-Pilot Mode achieved the best scores, while Autonomous Mode offered speed with lower effort. Across all modes, generated materials reached an acceptable quality level, showing that Instructional Agents can save time, cut cost, and support scalable course design.
We tested GPT-4o, GPT-4o-mini, and o1-preview as backends for Instructional Agents. All three models produced high-quality instructional materials across five courses, with no significant differences in reviewer scores.
GPT-4o-mini emerged as the best balance: it achieved quality on par with larger models while offering the lowest cost and fastest runtime. This makes it the default choice for scalable deployment.
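To give a feel for what swapping backends amounts to in practice, here is a minimal, hypothetical wrapper around the OpenAI Python SDK; the prompt and the `generate` helper are illustrative, not the project's actual code, and only the model name changes between backends.

```python
# Minimal backend wrapper (illustrative): switching between gpt-4o, gpt-4o-mini,
# and o1-preview only changes the `model` argument.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def generate(prompt: str, model: str = "gpt-4o-mini") -> str:
    """One completion call; gpt-4o-mini is the default for its cost/quality balance."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


# Example usage:
# print(generate("Write one measurable learning objective for an intro databases course."))
```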
Instructional Agents show clear trade-offs between speed, human effort, and cost. Autonomous Mode is the most efficient (2.23 hours, $0.22, no human time), while Full Co-Pilot Mode requires the most resources (4.73 hours, $0.36, 30-45 minutes of human time) but delivers the highest-quality outputs. Catalog-Guided and Feedback-Guided modes offer balanced options, requiring 10-30 minutes of instructor involvement at moderate cost.
@misc{yao2025instructionalagentsllmagents,
title={Instructional Agents: LLM Agents on Automated Course Material Generation for Teaching Faculties},
author={Yao, Huaiyuan and Xu, Wanpeng and Turnau, Justin and Kellam, Nadia and Wei, Hua},
year={2025},
eprint={2508.19611},
archivePrefix={arXiv},
primaryClass={cs.AI},
url={https://arxiv.org/abs/2508.19611},
}