Irish Journal of Technology Enhanced Learning, Vol 7, Issue 2
Special Issue: The Games People Play: Exploring Technology Enhanced
Learning Scholarship & Generative Artificial Intelligence
https://doi.org/10.22554/ijtel.v7i2.130
Apostolos Koutropoulos*
University of Massachusetts Boston
In this article I explore the scenario of syllabus creation using ChatGPT as a type of Assistant Course Designer. In early 2023, one of the proposed scenarios for LLM-based technologies such as ChatGPT was the creation of course materials, assignments, and syllabi for courses. The use case presented at teaching and learning conferences during this time typically featured an adjunct instructor who was assigned a course at the last minute, or an early career academic who might not have had previous experience in syllabus creation. The scenarios always hinged upon a lack of time to prepare, a lack of prior work that a user could adapt to the current situation, or both. While ChatGPT passes a surface-level inspection for this scenario, a deeper analysis of the output highlights the problems with relying on such a tool for this job.
This paper takes a brief look at the use of Artificial Intelligence (AI), as popularized by tools and services such as ChatGPT, to examine the current efficacy of such tools when used as a type of Assistant Course Designer for faculty and instructional designers. As it currently stands, reactions to AI in Education (AIEd) are mixed. Both bright and not-so-bright AIEd futures have been imagined (Bozkurt et al., 2023; Haven, 2023). There is criticism of the human toll associated with the training of the Large Language Models (LLMs) that underlie AI (Rowe, 2023), the environmental costs of this kind of computing (Dhar, 2020), the emerging concerns over copyright (Appel et al., 2023) and creativity (Brittain, 2023), as well as the toll that false accusations can take on students when there is an industry treating every student as a potential cheater (Klee, 2023). One important aspect of the zeitgeist is that AI is presented as an inevitability (Bearman et al., 2023) when it is not. We as users have agency, and that agency is denied to us through the use of this rhetoric. This inadvertent denial of agency could be part of the kinds of harm alluded to by McQuillan et al. (2023). Even though there are predictions that AI will change education rather than destroy it (Haven, 2023), AI still appears to be a novel solution in search of a yet undiscovered problem.
Over the course of 2023, many AI-based tools entered the consciousness of both learners and academics, bringing to the fore both a kind of existential dread and a kind of joy at the potential eradication of drudgery in work. On the existential dread side, there are questions as to whether these kinds of technologies will make certain jobs obsolete, and whether students will use them to cheat in some fashion. On the more joyous side, both students and academics may be looking for ways to automate aspects of their work that they don't like or that seem to take too long, such as discovering new academic publications, and perhaps even producing quick summaries of articles for class, a modern-day version of CliffsNotes.
In my day-to-day activities, I wear multiple hats, and different AI tools are applicable depending on the role. I manage an online master's program, where tools such as Grammarly aid with writing and editing, and tools like Otter.ai can produce transcripts for recorded lectures. I also teach courses for graduate students and faculty, where learners are curious about how they might use such tools to increase their productivity; exemplars in this category include ChatPDF, Scholarcy, Quillbot, and ResearchRabbit. Finally, I am an instructional designer, and this is the primary lens through which I view AI in this paper.
When ChatGPT, and AI in general, gained notoriety between March and July 2023, one of the popular use cases presented at teaching and learning conferences was the creation of a course syllabus. More specifically, the use case involves faculty, often precariously employed, who have been informed at the eleventh hour that they are teaching a course and need to produce a syllabus and associated course materials. Putting aside the organizational and ethical issues around such a use case, I accept the premise for the purposes of this thought experiment.
Initially, the tool demonstrated at these conference sessions was ChatGPT (version 3.5), but since then additional tools such as smartie.dev and gamma.ai have become available for use in similar contexts. According to these conference presentations, instructional faculty could use AI for tasks such as:
Using these suggestions as my initial ideas for prompts, I decided to try this out for a course that I am familiar with, since familiarity would make it easier to critique the output of an AI tool. Since 2012 I've been teaching a course titled "The Design and Instruction of Online Courses" for a master's program. This series of prompts was completed in June 2023 using the free version of ChatGPT 3.5. I chose ChatGPT because it was the tool most promoted at conferences, and because it allowed for prompt modifications and clarifications after something had been generated by the system. I started with the following prompt and then successively refined the provided output:
Please create a course outline for a graduate course titled "Design and Instruction of Online Courses" that has the following course description "This course is for instructional designers, instructors, teachers, or trainers who want to explore the critical success factors in designing and delivering online instruction. Through readings, discussion, and various activities, students will examine the pedagogical implications of technology-mediated learning, the dynamics of the virtual classroom, the elements of effective online course design, as well as some of the tools and technologies available to create and deliver online instruction. Through group-based and individual project work, students will design and create online modules. This course will use a range of interactive and collaborative instructional methods."
My starting prompt was the course description as it was listed in the University's Graduate Catalog. Over a series of 25 prompts (Koutropoulos, 2023a), I arrived at a synthetic syllabus for my course (Koutropoulos, 2023b). It's worth noting that I went beyond the suggested prompts for creating a synthetic syllabus. I took on a persona aligned with those presented at conferences: someone new to teaching, such as an early career academic, who might not yet have developed course policies. With this in mind, I asked ChatGPT to also create course policies for me. Because of the field that I focused on, there are professional standards with which a course can align; common examples include the IBSTPI standards (IBSTPI, 2021), ISPI standards (ISPI, n.d.), and ISTE standards (ISTE, 2017).
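The iterative refinement I describe can be sketched in code. The short Python sketch below is purely illustrative: I used the ChatGPT web interface, not an API, and every name in it (the build_conversation function, the placeholder reply text) is hypothetical. It shows only the general pattern of prompt chaining, in which each follow-up prompt is interpreted in the context of the conversation so far.

```python
# Illustrative sketch of the prompt-chaining pattern: each follow-up prompt
# is appended to a running conversation so that earlier output provides
# context for later refinements. All names here are hypothetical; the actual
# session used the ChatGPT web interface, which manages this history itself.

def build_conversation(initial_prompt, refinements):
    """Assemble a chat-style message history from an initial prompt and a
    list of follow-up refinement prompts."""
    messages = [{"role": "user", "content": initial_prompt}]
    for followup in refinements:
        # In a live session the model's reply would be inserted here; a
        # placeholder preserves the alternating user/assistant structure.
        messages.append({"role": "assistant", "content": "<model reply>"})
        messages.append({"role": "user", "content": followup})
    return messages

chain = build_conversation(
    'Please create a course outline for a graduate course titled '
    '"Design and Instruction of Online Courses" ...',
    [
        "What are the prerequisite skills and knowledge for this course?",
        "What are the course objectives for this course?",
        "Please create four major assignments for this course",
    ],
)
print(len(chain))  # 1 initial prompt + 3 x (reply + follow-up) = 7 messages
```

The point of the sketch is simply that each refinement depends on everything generated before it, which is why the prompts in my log (Koutropoulos, 2023a) only make sense read in sequence.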
The synthetic syllabus produced (Koutropoulos, 2023b) is a 28-page, single-spaced document that took about 90 minutes to produce. This time included both the prompt chains in ChatGPT (Koutropoulos, 2023a) and the copying, pasting, and minor formatting in Google Docs. I've adopted the word synthetic to denote text, images, audio, or other media produced by a large language model (Bender et al., 2021).
The section below contains excerpts of the full syllabus, which can be viewed online in its entirety (Koutropoulos, 2023b). The full prompt history is also available online (Koutropoulos, 2023a), but it has been edited here to conform to the article's space limitations.
Course Outline (14 Weeks):
Week 1: Introduction to Online Instruction
Week 2: Virtual Classroom Dynamics
Week 3: Elements of Effective Online Course Design
Week 4: Multimedia and Interactive Elements in Online Courses
Week 5: Tools and Technologies for Online Instruction
Week 6: Assessment and Feedback in Online Courses
Week 7: Designing and Creating Online Modules
Week 8: Interactive and Collaborative Instructional Methods
Week 9: Quality Assurance and Course Evaluation
Week 10: Accessibility and Inclusivity in Online Instruction
Week 11: Learner Support and Engagement Strategies
Week 12: Copyright and Intellectual Property in Online Courses
Week 13: Emerging Trends in Online Instruction
Week 14: Final Project Presentations and Reflection
What are the prerequisite skills and knowledge for this course?
The course "Design and Instruction of Online Courses" assumes that students entering the course have a certain set of prerequisite skills and knowledge. Here are some examples of the prerequisite skills and knowledge that would be beneficial for students taking this course:
What are the course objectives for this course?
Course Objectives for "Design and Instruction of Online Courses":
By the end of the course, students should have gained a comprehensive understanding of the critical success factors in designing and delivering online instruction and be able to apply their knowledge and skills to develop effective and engaging online courses.
Please create four major assignments for this course
Here are four major assignments for the "Design and Instruction of Online Courses" course:
Can you please create activities for each week of this course?
Here are suggested activities for each week of the "Design and Instruction of Online Courses" course:
Please provide me with course policies for this course
Course Policies:
When I started this process, I was interested in how long it would take to complete, so I kept an eye on my computer's clock as I was working to get a rough estimate. The creation of this synthetic syllabus took about 90 minutes, from first prompt to a basically formatted Google Doc. It's important to mention that, while I tried to embody the persona of an adjunct instructor who was just hired and is teaching this course at the eleventh hour, the sense of urgency or panic that someone might experience in that situation was not present. Furthermore, I already had most of the important sections of the syllabus in mind: I already knew the course description, so no time was wasted trying to find it, and I had some idea of the kinds of elements that I wanted to include in my syllabus. In the past, assuming subject-matter expertise, the creation of a typical course syllabus has taken me about the same amount of time.
One of my first observations is that ChatGPT defaulted to an 8-week course schedule, whereas my institution operates on a standard semester of approximately 14 weeks. This is a curious assumption, and due to the inability of an LLM to exhibit communicative intent (Bender & Koller, 2020), it is unknown whether the system assumed an intensive 8-week semester course. When prompted, ChatGPT converted that 8-week course schedule into a 14-week one. Comparing the two schedules, the 14-week course appears to be, for the most part, the 8-week course with some additional content to round out the extra weeks.
The course prerequisites generated were not all that flawed. The generated course objectives, however, had two major issues. First, they lacked any specificity about how this course fits into an MEd program curriculum. In most curricula, there is a connection across courses that determines what is included in a course and how subsequent courses build on what has come before. If this were a one-off, disconnected professional development course, the generated objectives might have been passable for the scenario presented; however, they do not compare favourably to course objectives that were intentionally designed. Second, the list of course objectives seemed rather lengthy, and certain outputs were not really well-formed objectives. For instance, the objective to “collaborate with classmates to explore innovative approaches to online instruction” reads more like one particular activity for exploring innovative approaches, and collaboration may not be the preferred pedagogical approach in that instance. Ultimately, since we don't know where the training data for ChatGPT came from (The Radical AI Podcast, 2023), and since LLMs take data from past examples and reproduce them (GRAILE AI, 2023), we know neither whether the training data is any good (The Radical AI Podcast, 2023) nor whether it was polluted by bad data (GRAILE AI, 2023). In this case, polluted data might be items in the LLM training corpus that are labelled as course objectives but are not really course objectives.
The required textbooks section seemed a bit scattershot. My observation is that there is most likely a fair amount of overlap in content among these texts. The total cost of textbooks in this synthetic syllabus would be $246 USD; while this cost may not be exorbitant, it might be unnecessary. In addition to the issue of cost, one of the textbooks, while real, was attributed to the wrong authors. Another issue is that when I prompted ChatGPT to provide a bibliography for the weekly readings, it mostly ignored the required texts for the course and provided different texts each week. Furthermore, it assigned full books for each week, which makes the course feel like it suffers from the course-and-a-half syndrome (Kaleta et al., 2005). Some texts in the weekly bibliography also appear to be made up, which is to be expected, as LLMs are designed to produce plausible strings of text rather than compare a query to a model of the real world (GRAILE AI, 2023).
Finally, I wanted to examine the activities and assessments that ChatGPT produced for this course. As a broad list of ideas that someone might pick from for their course, the list is not bad. However, if you want the system to produce something closer to a “turn-key solution” (a popular marketing term in educational technology), the results are not satisfactory. At a high level, there is an alignment problem between course activities, assessments, course objectives, and course materials. In some instances there is no discernible alignment; in others, it is close enough to pass a quick inspection but not sustained scrutiny. There also appear to be too many activities to engage with in the course; moreover, there is a lack of follow-through, and some of the activities connect only at a superficial level. For example, the design challenge group activity in Week 3 connects to some extent with the course design proposal due that same week, but as presented these two elements look to be separate, and as such create busy work for learners.
Looking back at this experiment, the process of producing something felt rather easy, and it was not all that time-consuming. However, given the issues that lie beneath the surface, I return to the question “is this a resource-efficient way to do this?” (The Radical AI Podcast, 2023), and my answer is no. The tool producing this synthetic text enters the conversation of course creation without being accountable for what it produces (GRAILE AI, 2023). If I were creating this syllabus with one or more human co-designers and co-authors, their claims and ideas could be easily discussed and verified before going into the product. An LLM does not have that capability. Synthetic media is hollow because it lacks a connection (The Radical AI Podcast, 2023), which brings me back to a statement I've heard many times about instructional design: it is the art and science of helping people learn what they want to (or need to) learn. Art and science both require a heart, a mind, and, as hard as it is to quantify, a soul. LLMs simply produce plausible strings of text, whether true or not, without any communicative intent.

Despite the hype that such tools reduce the cost and time of lesson writing by 70% (an unsubstantiated claim as of this writing), there are major issues associated with these tools, the most germane of which for this scenario is the quality of the synthetic output. While LLMs are trained on a huge amount of data, size guarantees neither quality nor diversity (Bender et al., 2021), and we run the risk of perpetuating a kind of digital colonialism and a continued marginalization of underrepresented people in our professions (Bender et al., 2021; Kwet, 2019). I think that more can be gained through critical engagement between peers in syllabus creation than from something produced by an LLM.
Synthetic text looks to the past to uncritically produce more of the same, whereas critical engagement with peers charts potential new directions as our respective fields evolve. If I needed a syllabus for a course that starts tomorrow, I’d be better off asking a colleague if I could use theirs.
Appel, G., Neelbauer, J., & Schweidel, D. A. (2023). Generative AI has an intellectual property problem. Harvard Business Review Digital Articles, 1–10. https://hbr.org/2023/04/generative-ai-has-an-intellectual-property-problem
Bearman, M., Ryan, J., & Ajjawi, R. (2023). Discourses of artificial intelligence in higher education: A critical literature review. Higher Education, 86(2), 369–385. https://doi.org/10.1007/s10734-022-00937-2
Bender, E. M., & Koller, A. (2020, July). Climbing towards NLU: On meaning, form, and understanding in the age of data. In Proceedings of the 58th annual meeting of the association for computational linguistics (pp. 5185-5198). https://aclanthology.org/2020.acl-main.463/
Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021, March). On the dangers of stochastic parrots: Can language models be too big?🦜. In Proceedings of the 2021 ACM conference on fairness, accountability, and transparency (pp. 610-623). https://doi.org/10.1145/3442188.3445922
Brittain, B. (2023, August 21). AI-generated art cannot receive copyrights, US court says. Reuters. https://www.reuters.com/legal/ai-generated-art-cannot-receive-copyrights-us-court-says-2023-08-21/
Dhar, P. (2020). The carbon impact of artificial intelligence. Nature Machine Intelligence, 2(8), 423–425. https://doi.org/10.1038/s42256-020-0219-9
GRAILE AI. (2023, August 8). ChatGP-why: When, if ever, is synthetic text safe, appropriate, and desirable? [Video]. YouTube. https://www.youtube.com/watch?v=qpE40jwMilU
Haven, W. D. (2023, April 6). ChatGPT is going to change education, not destroy it. MIT Technology Review. https://www.technologyreview.com/2023/04/06/1071059/chatgpt-change-not-destroy-education-openai/
IBSTPI. (2021). Instructional designer competencies. The International Board of Standards for Training, Performance and Instruction. https://ibstpi.org/competency-sets-services/instructional-designer-competencies/
ISTE (2017). ISTE standards: For educators. International Society for Technology in Education. https://iste.org/standards/educators
ISPI (n.d.). Certified performance technologist performance standards. International Society for Performance Improvement. https://ispi.org/page/CPTStandards
Klee, M. (2023, June 6). She was falsely accused of cheating with AI — and she won’t be the last. Rolling Stone. https://www.rollingstone.com/culture/culture-features/student-accused-ai-cheating-turnitin-1234747351/
Kaleta, R., Garnham, C., & Aycock, A. (2005). Hybrid courses: Obstacles and solutions for faculty and students. The Annual Conference on Distance Teaching and Learning, 1–6.
Koutropoulos, A. (2023a). Synthetic syllabus ChatGPT log [Unpublished document]. https://drive.google.com/file/d/1QeqctRvE4hYq8lzUIhwrBliw-LasHWax/view?usp=sharing
Koutropoulos, A. (2023b). Synthetic syllabus for 684 [Unpublished document]. https://docs.google.com/document/d/1T36ylejpJm1NKzjeWea9TPm0_QUmfVk3JOzv6h6sn_A/edit#heading=h.jb9ucjfqnb3s
Kwet, M. (2019). Digital colonialism: US empire and the new imperialism in the Global South. Race & Class, 60(4), 3–26. https://doi.org/10.1177/0306396818823172
McQuillan, D., Jarke, J., & Pargman, T. C. (2023). We are at an extreme point where we have to go all in on what we really believe education should be about. Postdigital Science and Education. https://doi.org/10.1007/s42438-023-00433-5
Rowe, N. (2023, August 2). ‘It’s destroyed me completely’: Kenyan moderators decry toll of training of AI models. The Guardian. https://www.theguardian.com/technology/2023/aug/02/ai-chatbot-training-human-toll-content-moderator-meta-openai
The Radical AI Podcast. (2023, March 1). The limitations of ChatGPT with Emily M. Bender and Casey Fiesler [Video]. YouTube. https://www.youtube.com/watch?v=gSRN_3pkTsc
* Corresponding author: a.koutropoulos@umb.edu