Irish Journal of Technology Enhanced Learning, Vol 7, Issue 2
Special Issue: The Games People Play: Exploring Technology Enhanced Learning Scholarship & Generative Artificial Intelligence
https://doi.org/10.22554/ijtel.v7i2.155  

Reflections on a Collective Creative Experiment with GenAI: Exploring the Boundaries of What is Possible

Leigh Graves Wolf*1, Tom Farrelly2, Orna Farrell3, Fiona Concannon4

1 University College Dublin
2 Munster Technological University
3 Dublin City University
4 University of Galway

Introduction

We would like to start this editorial with sincere gratitude. In putting out a call with such a tight turnaround, we were acutely aware of the pressure we were placing on the contributors, the reviewers and ourselves as editors. However, we were equally cognisant of the rapidly changing nature of the world of Generative Artificial Intelligence (GenAI) and its impact on the world of education. Thus, we wanted to publish a timely issue by compressing the whole process, from the call, to review, to copyediting and finally to publication, into approximately 11 weeks. (Ultimately, from call to publication, the process took 81 days.) First, thank you to all who took the time to submit manuscripts for consideration. A good portion of academic labour is invisible and unrecognised, and we want to acknowledge and thank you for the time you dedicated to creating submissions. Second, thank you to the reviewers who turned things around very quickly in a professional and supportive manner in order to meet our ambitious timetable. Finally, thank you to the authors who appear in this issue and who worked quickly to turn around revisions and edits. As an editorial team, we learned a great deal about our own procedures, processes and patterns, which we will carry forward to continue to improve the Irish Journal of Technology Enhanced Learning.

In the issue that follows, we hope to provide a snapshot of a moment in time. When ChatGPT was released in November 2022, it created ripples in education that had not been seen in quite some time. Countless articles, framing it as everything from the downfall of education (Devlin, 2023; Chomsky, 2023) to its solution (Heaven, 2023; Seetharaman, 2023) and all things in between (Leaver & Srdarov, 2023), flash across our screens daily. Places of education are scrambling to create policies, and there has been a swift reaction to GenAI at national, European and global levels. In Ireland, Quality and Qualifications Ireland (QQI) issued broad advice for tertiary education providers on GenAI in the context of assessment, academic integrity and the reworking of assessment strategies (National Academic Integrity Network, 2023). In Europe, the European Network for Academic Integrity (ENAI) published very useful recommendations on the ethical use of artificial intelligence in education in May (Foltynek et al., 2023). At the global level, UNESCO (2023) published a simple guide for educators, ChatGPT and Artificial Intelligence in higher education: Quick start guide, in April. In November, Australia produced a national framework for the use of GenAI in schools (Commonwealth of Australia, 2023). One clear throughline has been the need for faculty to increase their digital literacy and understanding of GenAI (Laupichler et al., 2022; Farrelly & Baker, 2023; Southworth et al., 2023). This was the driving force for this special issue. As a journal, we wanted to create a safe, open and scholarly platform for engaging with GenAI. The hope is that this issue can serve as a mentor text for discussion and experimentation.

Conundrums

The prefix ‘post’ is intended to signify that something comes after an event or an era. In some cases, we are not always sure when one era has finished and another has begun. In the case of GenAI, however, we arguably do have a defining moment. While AI in various forms has existed for many years, we would suggest that the launch of ChatGPT on the 30th of November 2022 was one such era-ending/beginning moment. In the 12 months since then, the subsequent explosion in the range and functionality of large language models (LLMs) has been a game changer for educators, students and educational institutions. This is not to imply that we embarked on this special issue lightly, nor without acknowledgement of several issues with GenAI, such as hallucinations, bias and impact on the climate, to name just three (Rich & Gureckis, 2019; Briganti, 2023; Khowaja et al., 2023).

Professor Sarah Eaton (2023) suggests that we have entered a ‘postplagiarism’ age in which hybrid human-AI writing will become standard. Whether this level of hybrid writing will become the norm is hard to say, but we certainly need to acknowledge that our previous conceptions of what constitutes academic output must take into account a new reality. We are aware that some journals have explicitly stated that they will not accept GenAI as a co-author (e.g. Thorp, 2023). It is not our intention in this editorial to debate such a stance, nor are we committing the journal’s future editorial policy at this point. It is important to note that authors in this issue explicitly did not list GenAI as a co-author on papers but, rather, were transparent and critical regarding its use in context. Suffice it to say that the intention for this special issue was simply to provide a forum for exploration, exposition and reflection on the GenAI academic writing process.

The ethical conundrums associated with GenAI were also at the forefront of our minds. Many students and faculty alike refuse to engage with GenAI; how, then, can one become familiar with the literacies, such as prompt engineering and an awareness of hallucinations, that will be necessary to be informed citizens? Questions of where and how GenAI is generating content (Koebler, 2023) should always be at the forefront. The authors in this issue have been very transparent about their processes and prompts, and we hope that the articles serve as starting points for conversations while also remaining explicitly cognisant of the very serious ethical dilemmas at play.

Subjectification and creative experimentation

A key rationale behind this edition was that of creative experimentation. One of the threefold purposes of education, according to Biesta (2020), is subjectification, or how education impacts on the student either “by enhancing or by restricting capacities and capabilities” (p. 92). We are reminded of our freedom and agency when faced with new technology: to decide what to do with it and what actions to take. Our special issue was intended as an opportunity to give ourselves time to encounter this freedom, to take this subjectification as one of the “beautiful risks of education” (Biesta, 2020, p. 100); to approach the conundrum as subjects, moving beyond learning what responses a prompt provokes in order to qualify our knowledge, and taking this opportunity to attend to other domains of purpose in our sensemaking.

A core part of our humanity is our curiosity. We learn from a young age through playful encounters (Broadhead & Burt, 2012; Sawyer, 2006). Within the field of educational technology, since its inception, digital pedagogies have been connected to tinkering, making, sociocultural explorations and participatory networks. Recent commentary suggests that students have embraced GenAI for educational purposes more enthusiastically than their teachers (Chan & Lee, 2023). Therefore, this special issue was about giving ourselves good cause to experiment and play, to better understand how GenAI works, and to share judgements about the quality of the outputs and the cautions to keep in mind. In turn, this creative experimentation supports us not only in considering how our own scholarship is being affected, but also places us better to discuss expectations with our students and to give advice on how GenAI supports or detracts from learning in our disciplinary areas. Legitimising this experimentation through a peer-reviewed journal served as a forum for this intellectual exchange: to share and strengthen our emergent understandings and critical reflections, and to present this process, along with the vulnerability of indefinite answers, in a rapidly changing technological landscape, for all to see.

What does this mean for journals and technology enhanced learning?

As editors we wrestled with the boundaries of GenAI, asking ourselves questions like: should we use it for peer review? Should we use it to write this editorial? Ultimately, we decided not to venture into this territory. We did attempt to create a peer review checklist with ChatGPT 3.5, but it was not especially helpful. (Was that because we did not prompt it well? Because we were using a “free” version?) If we were to upload accepted papers to an LLM, we would also need to devise a transparent process and request permission from authors to submit their work. Given the tight turnaround time and the complexities involved, GenAI was not a timesaving tool in this instance.

Learning from the contributors to this issue, and had we had more time, we feel there could have been ethical ways to engage AI in the production process. For example, if we had access to a “closed” in-house instance of ChatGPT, could we have uploaded all of the manuscripts and asked it to summarise, highlight and identify patterns? Perhaps. There is much talk about the worry (or excitement) over GenAI being used to create content. However, given the difficulty and challenges associated with peer review and the free labour given to the review and production process, will journals be using GenAI in the back-end production of scholarship in the future? Highly likely (Swaak, 2023; Michigan Institute for Data Science, 2023; Peres et al., 2023).

Special Issue Summary

Despite being the Irish Journal of Technology Enhanced Learning, we are delighted to present perspectives from eight different countries. In the spirit of open learning, it was important to represent a range of novice to nuanced uses of GenAI in this special issue. Kaplan-Rakowski et al. (2023) have found that faculty awareness, understanding and integration of GenAI is on a spectrum, ranging from terrified to confident application, and we hope that range is represented in this issue. Authors represent a wide range of the educational spectrum (from secondary, to higher and beyond) and an even wider range of academic disciplines. 

In the issue that follows, we present five position papers, thirteen short reports and two book reviews. The issue is arranged simply by article type, then alphabetically. (Upon reflection, might this have been a place where AI could have helped us see patterns across the manuscripts more quickly?)

In the collection of short reports, all of the AI-generated content was produced by ChatGPT (the majority using ChatGPT 3.5 and one author using ChatGPT-4). The position papers were also predominantly authored with ChatGPT; however, you will see three experiments with Perplexity and another which combined AI tools. Interestingly, both book reviews were facilitated using Claude. We were taken by the wide variation of prompts and engagement with GenAI, and realise we may have been a bit naive ourselves in structuring the call. We thank all of the authors for their creative interpretations; our collective digital literacies will increase as a result of engaging with your processes.

Each paper follows the same structure: Abstract, Introduction, AI Generated Content, Critical Reflection and References. While each section is worthy of scrutiny, we want to draw attention to the sometimes overlooked reference sections, as they contain a fascinating collection of scholarship that serves as the foundation for much of the thinking presented. They are a treasure trove of “old and new” and will lead you down those joyous rabbit holes we experience as consumers of academic writing.

Parting Words 

To conclude, let us return to the questions posed in the call for papers:

What does AI really know about technology enhanced learning? 
What happens when you go “all in” with AI? 
What does engaging in this process say not only about our discipline, but, our humanity and identity as scholars?  

It is clear to us, for now, that we are not going to be replaced by GenAI. However, it is also clear that it is not going away; it “knows” a few things and is not something that we can ignore. We must critically engage, both with the technology itself and with each other, to determine how this tool will shape not only the educational landscape but society at large. We must not rely on old paradigms but, as Heath et al. (2023) suggest, seek collective, critical and ecological ways through this moment in time. We are no strangers to this challenge (Waters, 2023). We have done this work before, and it is critical now more than ever to use the past to shape the future and to remain steadfast in imagining just, fair and hopeful ways to learn with AI (Pechenkina, 2023).

How individuals and institutions are responding to this changing environment varies from outright repudiation to wholesale embrace, and everything in between. On the one hand, there are researchers and practitioners who have creatively engaged with GenAI head-on, experimenting, playing and seeking to understand the impact of this new technology on education. On the other, there are those who seek to ban it outright, refusing to engage with it. Our field has a long history of technology fads and hype cycles, and we must continue to interrogate any new educational technology tool with a critical mindset (Orben, 2020; Weller, 2020; Watters, 2021). Engaging in this reflective process shows that while technology enhanced learning as a field is flexible, fast-moving and responsive, it is not uncritical.

The reality is that the majority are somewhere in between these two poles; they are, to varying degrees: intrigued, perplexed, excited, unsure and nervous about what GenAI means for the future of education and indeed society.  Arthur C. Clarke’s (1962) second law proposes that “The only way of discovering the limits of the possible is to venture a little way past them into the impossible” (pp. 20-21). We certainly do not go as far as to claim that this special issue offers a view of the impossible, but we do hope that we have at least pushed the boundaries of the possible a few steps forward. 

References

Biesta, G. (2020). Risking ourselves in education: Qualification, socialization, and subjectification revisited. Educational Theory, 70(1), 89-104. https://doi.org/10.1111/edth.12411 

Briganti, G. (2023). How ChatGPT works: A mini review. European Archives of Oto-Rhino-Laryngology. https://doi.org/10.1007/s00405-023-08337-7

Broadhead, P., & Burt, A. (2012). Understanding young children's learning through play: Building playful pedagogies. Routledge.

Chan, C. K. Y., & Lee, K. K. W. (2023). The AI generation gap: Are Gen Z students more interested in adopting generative AI such as ChatGPT in teaching and learning than their Gen X and millennial generation teachers? Smart Learning Environments, 10(1), 60. https://doi.org/10.1186/s40561-023-00269-3

Chomsky, N. (2023, March 8). Opinion: The false promise of ChatGPT. New York Times. https://www.nytimes.com/2023/03/08/opinion/noam-chomsky-chatgpt-ai.html

Clarke, A. C. (1962). Profiles of the future: An enquiry into the limits of the possible. Harper & Row. https://archive.org/details/profilesoffuture0000clar

Commonwealth of Australia. (2023). Australian framework for generative artificial intelligence (AI) in schools. Department of Education. https://www.education.gov.au/schooling/resources/australian-framework-generative-artificial-intelligence-ai-schools

Devlin, H. (2023, July 7). AI likely to spell end of traditional school classroom, leading expert says. The Guardian. https://www.theguardian.com/technology/2023/jul/07/ai-likely-to-spell-end-of-traditional-school-classroom-leading-expert-says 

Eaton, S. (2023, February 25). 6 Tenets of postplagiarism: Writing in the age of artificial intelligence. Learning, Teaching and Leadership: A blog for educators, researchers and other thinkers by Sarah Elaine Eaton, Ph.D. https://drsaraheaton.wordpress.com/2023/02/25/6-tenets-of-postplagiarism-writing-in-the-age-of-artificial-intelligence/

Farrelly, T., & Baker, N. (2023). Generative artificial intelligence: Implications and considerations for higher education practice. Education Sciences, 13(11), 1109. https://doi.org/10.3390/educsci13111109

Foltynek, T., Bjelobaba, S., Glendinning, I., Khan, Z. R., Santos, R., Pavletic, P., & Kravjar, J. (2023). ENAI recommendations on the ethical use of artificial intelligence in education. International Journal for Educational Integrity, 19(1), 12. https://doi.org/10.1007/s40979-023-00133-4

Heath, M. K., Gleason, B., Mehta, R., & Hall, T. (2023). More than knowing: Toward collective, critical, and ecological approaches in educational technology research. Educational Technology Research and Development. https://doi.org/10.1007/s11423-023-10242-z

Heaven, W. D. (2023, April 6). ChatGPT is going to change education, not destroy it. MIT Technology Review. https://www.technologyreview.com/2023/04/06/1071059/chatgpt-change-not-destroy-education-openai/ 

Kaplan-Rakowski, R., Grotewold, K., Hartwick, P. & Papin, K. (2023). Generative AI and teachers’ perspectives on its implementation in education. Journal of Interactive Learning Research, 34(2), 313-338. https://www.learntechlib.org/primary/p/222363/ 

Khowaja, S. A., Khuwaja, P., & Dev, K. (2023). ChatGPT needs SPADE (sustainability, PrivAcy, digital divide, and ethics) evaluation: A review. arXiv. https://doi.org/10.48550/arxiv.2305.03123

Koebler, J. (2023, November 29). Google researchers’ attack prompts ChatGPT to reveal its training data. 404 Media. https://www.404media.co/google-researchers-attack-convinces-chatgpt-to-reveal-its-training-data

Laupichler, M. C., Aster, A., Schirch, J., & Raupach, T. (2022). Artificial intelligence literacy in higher and adult education: A scoping literature review. Computers and Education: Artificial Intelligence, 3, 100101. https://doi.org/10.1016/j.caeai.2022.100101 

Leaver, T., & Srdarov, S. (2023). ChatGPT isn’t magic: The hype and hypocrisy of generative artificial intelligence (AI) rhetoric. M/C Journal, 26(5). https://doi.org/10.5204/mcj.3004 

Michigan Institute for Data Science. (2023, November 29). Using generative AI for scientific research: A quick user’s guide. https://midas.umich.edu/generative-ai-user-guide 

National Academic Integrity Network. (2023, August). Generative artificial intelligence: Guidelines for educators. Quality & Qualifications Ireland. https://www.qqi.ie/sites/default/files/2023-09/NAIN%20Generative%20AI%20Guidelines%20for%20Educators%202023.pdf

Orben, A. (2020). The sisyphean cycle of technology panics. Perspectives on Psychological Science, 15(5), 1143-1157. https://doi.org/10.1177/1745691620919372 

Pechenkina, E. (2023). Artificial intelligence for good? Challenges and possibilities of AI in higher education from a data justice perspective. In L. Czerniewicz & C. Cronin (Eds.), Higher education for good: Teaching and learning futures (pp. 239-266). Open Book Publishers. https://doi.org/10.11647/OBP.0363.09

Peres, R., Schreier, M., Schweidel, D., & Sorescu, A. (2023). On ChatGPT and beyond: How generative artificial intelligence may affect research, teaching, and practice. International Journal of Research in Marketing, 40(2), 269–275. https://doi.org/10.1016/j.ijresmar.2023.03.001 

Rich, A. S., & Gureckis, T. M. (2019). Lessons for artificial intelligence from the study of natural stupidity. Nature Machine Intelligence, 1(4), 174–180. https://doi.org/10.1038/s42256-019-0038-z 

Sawyer, R. K. (2006). Explaining creativity: The science of human innovation. Oxford University Press.

Seetharaman, R. (2023). Revolutionizing medical education: Can ChatGPT boost subjective learning and expression? Journal of Medical Systems, 47(1), 61. https://doi.org/10.1007/s10916-023-01957-w

Southworth, J., Migliaccio, K., Glover, J., Glover, J., Reed, D., McCarty, C., Brendemuhl, J., & Thomas, A. (2023). Developing a model for AI across the curriculum: Transforming the higher education landscape via innovation in AI literacy. Computers and Education: Artificial Intelligence, 4, 100127. https://doi.org/10.1016/j.caeai.2023.100127

Swaak, T. (2023, September 27). ‘We’re all using it’: Publishing decisions are increasingly aided by AI. That’s not always obvious. The Chronicle of Higher Education. https://www.chronicle.com/article/were-all-using-it-publishing-decisions-are-increasingly-aided-by-ai-thats-not-always-obvious 

Thorp, H. H. (2023). ChatGPT is fun, but not an author. Science, 379(6630), 313. https://doi.org/10.1126/science.adg7879

Watters, A. (2021). Teaching machines. The MIT Press.

Weller, M. (2020). 25 years of ed tech. Athabasca University Press. https://doi.org/10.15215/aupress/9781771993050.01



* Corresponding author: leigh.wolf@ucd.ie