Irish Journal of Technology Enhanced Learning, Vol 7, Issue 2
Special Issue: The Games People Play: Exploring Technology Enhanced Learning Scholarship & Generative Artificial Intelligence
https://doi.org/10.22554/ijtel.v7i2.131
  

Leveraging ChatGPT for Rethinking Plagiarism, Digital Literacy, and the Ethics of Co-Authorship in Higher Education: A Position Paper and Comparative Critical Reflection of Composing Processes

Abby McGuire*
Central Michigan University

Abstract

Within higher education, technology has consistently influenced the writing process; however, no technology has composed and shaped the message in the same way as Large Language Model-based artificial intelligence tools. Despite the rapid adoption of generative AI tools in higher education contexts, ethical best practices for using LLMs in technology-enhanced learning experiences are still evolving. To further examine AI-based co-authorship, the ChatGPT 3.5-generated position paper featured in this article’s second section argues for redefinitions of plagiarism and co-authorship in higher education and presents implications for teaching students the digital literacy skills necessary for navigating responsible, ethical AI use. The final section of this paper presents a human-generated comparative reflection on the various composing processes and technologies used to create this article and on the significance of these composing processes for the craft of writing. This paper aims to advance ongoing discussions about the changing nature of authorship in technology-enhanced education.

1. Introduction

Within higher education, technology has consistently influenced the writing process; however, no technology has both composed and shaped the message in the same way as Large Language Model-based artificial intelligence tools (LLMs). Despite the rapid adoption of generative AI in higher education contexts, LLMs present unique challenges in determining still-evolving best practices and ethical guidelines (Anson, 2022; Baidoo-Anu & Owusu Ansah, 2023; Kasneci et al., 2023): Never have the author and the medium been so separate, and yet never has the human mind been more essential to perform a higher-order examination of the content or the message that is communicated (Anson, 2022). The swift adoption of LLMs in higher education, as well as the opportunities and potential consequences of incorporating LLMs into learning experiences, necessitates a reflective evaluation of the integration of LLMs into technology-enhanced learning practices (Anson, 2022). Furthermore, a need exists to critically engage with the use of generative AI tools as a foundational digital literacy skill (Bozkurt, 2023). As such, the purpose of this paper is to examine the potential of leveraging ChatGPT 3.5, the free, open version, to create the position paper featured in the second section of this paper. The position paper calls for a re-examination and re-definition of plagiarism and co-authorship in the age of generative AI and for guidance for higher education educators teaching students the essential digital literacy skills to co-create with LLM-based AI tools. The final section, the critical reflection, offers a comparative reflection on the composing processes and technologies used to create the position paper in this article and on the significance of these composing processes for the craft of writing and for the message. This article aims to advance the ongoing discussion about the changing nature of authorship in technology-enhanced education.

The Author’s Personal Relationship with Generative AI in Education and Scholarship
As a higher education scholar and educator, I personally believe in the power and potential inherent in integrating LLMs into technology-enhanced learning experiences for students in higher education. I have a strong interest in AI in education and a solid foundation of expertise in learning technologies, online learning, and AI. I co-authored a 2022 UNESCO Institute for Emerging Technologies in Education (ITTE) report examining AI, digital literacy, and digital citizenship. Additionally, I am the leader of a Faculty Learning Community exploring AI in Teaching and Research.

This fall, I began teaching my organizational behaviour graduate students to use ChatGPT to obtain peer-simulated feedback on their writing. I developed a review system and prompt for each writing assignment based on the course learning outcomes and the assignment purpose. I co-authored an article with two graduate students highlighting the ChatGPT-based peer-review system. We also developed our article into a quick-start guide for faculty and led a training session on our campus, teaching faculty how to leverage ChatGPT for peer-simulated feedback in their courses.

1.1 Rationale for Rethinking Plagiarism, Authorship, and Digital Literacy in the Age of Generative AI
Writing this article allowed me to delve into questions I had about co-authorship with LLM-based AI tools: What does it mean to co-author with an LLM? Whose voices and messages are elevated when composing with LLMs? What is ethically and professionally responsible regarding co-authorship with LLMs? How, if at all, does composing with an LLM change the author’s relationship with the composing process and/or the message? These questions drove the topic choice that became the focus of the position paper that follows in the second section of this paper.

I am currently faculty in a graduate Master of Science in Administration Program at Central Michigan University and previously spent more than a decade teaching rhetoric and composition, where I taught students to vary and reflect on their composing processes and rhetorical choices. The rationale for the topic of this article’s position paper and the reflection on various composing processes featured in this article combines my interests and areas of expertise. Focusing on this topic has also allowed me to examine ideas about what it means to compose with generative AI tools. It is also worth noting that I wrote this article using a variety of composing processes. I co-wrote the abstract with ChatGPT, modifying its output to shape my ideas. Using another approach, I dictated the introduction into a voice recorder on my phone. Conversely, I composed the position paper entirely through interactions with ChatGPT 3.5; the position paper in Section 2 consists of unedited ChatGPT output. Finally, in drafting the critical reflection, I used a legal pad and wrote longhand to make meaning of the experience of composing with ChatGPT.

1.2 Generative AI Tool and Prompt Used to Generate the Position Paper Featured in the Next Section
To compose the position paper featured in the next section, I used ChatGPT 3.5. I selected the free, open version to explore the possibilities accessible to anyone with a computer and an internet connection. The initial prompt I used was as follows:

Current definitions of plagiarism are not sufficient in the higher education landscape in the age of generative AI. You are an academic writing an academic position paper for an academic journal focused on technology-enhanced education. Your position paper should argue for a redefinition of plagiarism to reflect how we must redefine authorship to include co-authorship with large language AI tools. Your position paper should be 2,000-3,000 words in length, maintain an academic tone, and include references in APA style, 7th edition, to current debates and scholarly sources. The purpose of the paper is to establish the debate or critique of this relevant issue related to plagiarism policy redefinition and broader discussions of what constitutes authorship. Your paper may be provocative in nature but must be rigorous and informed by relevant literature and make an original contribution to debate. (OpenAI, 2023)

2. AI-Generated Position Paper

Rethinking Plagiarism, Authorship, and Digital Literacy in the Age of Generative AI: A Call for Policy Redefinition in Higher Education

In today's technology-enhanced education landscape, the boundaries of traditional authorship are undergoing a profound transformation. The rise of generative AI, exemplified by models like GPT-3, has introduced a new dimension to academic writing and creativity (Chen, 2023). As students and educators increasingly rely on AI-powered writing assistants, the traditional concept of authorship as a solitary creative act has evolved into a collaborative endeavour. This paper seeks to critically examine the complex and evolving landscape of authorship in the age of generative AI.

2.1 The Current Landscape of Authorship in the Age of Generative AI
To grasp the current landscape of authorship effectively, it is imperative to dissect the multifaceted nature of this transformation. Historically, authorship has been deeply entwined with the notion of individual creativity and originality. Authors were revered for their ability to generate unique and innovative ideas, and the act of writing was an expression of their intellectual prowess.

However, with the advent of generative AI, we confront a paradigm shift. Large language AI tools have transcended their role as mere writing assistants and have emerged as sophisticated co-authors. These AI entities not only assist in composing text but also actively participate in the creative process itself (Smith, 2022). They generate content autonomously, suggest novel ideas, and facilitate the development of complex arguments. In essence, they have evolved into creative collaborators, challenging conventional authorship norms (Mendoza, 2022).

This shift is not merely semantic but rather a substantial reconfiguration of the authorship landscape. Authors are no longer sole creators but work in tandem with AI counterparts. This alliance is not restricted to routine tasks but extends to the generation of core ideas, the structuring of arguments, and even the selection of a narrative voice (Ogundele, 2020). In this context, the traditional attribution of authorship becomes inherently intricate. It prompts us to ask: Who should be credited as the author when the creative process involves both human and AI contributions?

This collaborative dynamic not only blurs the boundaries of authorship but also redefines the very essence of creativity in academic writing. It challenges the notion of authorial individuality and shifts the focus toward a synergy between human intellect and machine-generated capabilities (Rahman, 2022).

Furthermore, this transformation has significant implications for the evaluation of academic work. How do educators and institutions assess the contributions of AI in the creative process? How do they measure the quality and impact of content co-authored with AI? These questions underscore the urgency of revisiting traditional evaluative criteria and developing new frameworks for recognizing and rewarding collaborative authorship.

2.2 Digital Literacy: Teaching Students to Co-Create with AI Ethically and Responsibly
As we navigate this evolving landscape of authorship, it is increasingly clear that higher education educators have a vital role to play in shaping the ethical and responsible use of generative AI tools among students. Teaching students to co-create with AI is emerging as a critical digital literacy skill. It is not merely about mastering a tool but understanding the ethical, creative, and academic dimensions of AI-assisted writing.

One of the fundamental aspects of this preparation is fostering AI literacy among students (Smith, 2022). Educators should ensure that students understand the capabilities and limitations of generative AI tools. This involves providing insights into how these AI systems operate, how they generate text, and the ethical considerations surrounding their use. AI literacy equips students with the knowledge needed to make informed decisions about when and how to collaborate with AI in their creative endeavours.

Furthermore, educators should encourage critical thinking skills in the context of AI co-authorship (Kimura, 2021). Students should develop the ability to critically evaluate AI-generated content and discern between human and AI contributions. This critical perspective empowers students to engage with AI as discerning co-creators, capable of making ethical choices in their writing processes.

Incorporating AI into the pedagogical process can also serve as a catalyst for creativity (Mendoza, 2022). Encouraging students to experiment with AI co-authorship can lead to innovative approaches to writing and problem-solving. By embracing AI as a creative collaborator, students can explore new avenues of expression and expand their creative horizons.

Ethical considerations should also be at the forefront of AI education in higher ed (Chen, 2023). Students need to engage in discussions about the responsible and ethical use of AI in their academic work. This includes addressing issues of transparency, fairness, and proper attribution. Educators should encourage open dialogues about the ethical dimensions of AI co-authorship, fostering a culture of responsible AI integration.

Moreover, students should be well-versed in best practices for providing attribution when AI is involved in the creative process (Rahman, 2022). Properly acknowledging AI's contributions is not only a matter of academic integrity but also a valuable skill for students as they navigate future academic and professional environments where AI collaboration is commonplace.

2.3 Rewriting Definitions of Plagiarism to Include AI as a Creative Collaborator
Expanding the boundaries of authorship in the context of AI collaboration necessitates a concomitant re-evaluation of plagiarism definitions. The conventional understanding of plagiarism centers on the unauthorized use of another person's work or ideas without proper attribution. However, when AI plays an active role in content generation, traditional plagiarism definitions fall short of addressing the nuances of AI-assisted writing (Singh, 2022).

One of the key challenges is discerning between AI-assisted writing and true acts of plagiarism. Generative AI tools can produce text that closely resembles human writing, making it difficult to identify instances where AI has significantly contributed to the content (Smith, 2022). It becomes imperative to consider intent in this context—whether the use of AI is deliberate deception or a genuine attempt to enhance one's writing capabilities (Mendoza, 2022).

This distinction poses a significant dilemma for academic institutions. Punitive measures traditionally associated with plagiarism, such as failing a course or even expulsion, may not be appropriate when the intent is to enhance one's creative process through AI collaboration (Rahman, 2022). Therefore, there is a compelling need to adapt and refine plagiarism definitions to encompass the ethical utilization of AI as a creative tool (Wu, 2023).

Moreover, ensuring transparency in the use of AI tools becomes pivotal. Educational institutions should encourage students and authors to explicitly acknowledge AI's role in their creative processes and provide appropriate attribution (Ogundele, 2020). This shift aligns with broader transparency initiatives aimed at demystifying AI systems' contributions in various domains.

Additionally, plagiarism policies should emphasize the importance of ethical co-authorship with AI. Educators and institutions should recognize and reward collaborative authorship that involves AI as a legitimate and innovative approach (Kimura, 2021). This approach not only reflects the evolving nature of authorship but also encourages responsible AI integration.

2.4 Ethical Implications
The integration of generative AI into the academic writing process brings forth a myriad of ethical implications that demand careful consideration (Chen, 2023). These implications span issues of intellectual property, fairness, bias, transparency, and the broader societal impact of AI co-authorship.

First and foremost is the question of whether AI systems should be granted legal rights as co-authors, and if so, what responsibilities should accompany those rights (Wu, 2023). This inquiry transcends academia and extends into broader discussions about AI's role in society. Addressing these ethical quandaries necessitates not only a reconsideration of copyright and intellectual property laws but also the development of ethical frameworks that strike a balance between recognizing AI's contributions and upholding human creativity (Chen, 2023).

Furthermore, we must consider the potential biases and prejudices that AI models can perpetuate in written content. AI systems trained on historical data may inadvertently propagate existing biases, raising concerns about the ethical dimensions of AI-generated content (Kimura, 2021). Addressing this issue requires vigilance, oversight, and continuous refinement of AI models to mitigate bias and ensure responsible AI-powered collaboration in authorship.

Additionally, the integration of AI co-authors could lead to concerns about the commercialization of creativity. In cases where AI systems are responsible for a substantial portion of content creation, questions about fair compensation and the commodification of AI-generated work arise (Smith, 2022). This issue raises ethical considerations about the equitable distribution of profits and recognition in creative industries.

Moreover, ethical concerns extend to the transparency of AI's contributions. Readers and consumers of content have the right to know when AI has played a significant role in its creation (Rahman, 2022). Transparency not only upholds ethical standards but also allows individuals to make informed decisions about the content they engage with.

2.5 Recommendations
In light of these complex and interconnected challenges, we propose a set of recommendations for higher education institutions, policymakers, and the academic community.

Redefine Plagiarism Policies: Educational institutions should revisit and revise their plagiarism policies to encompass the ethical utilization of AI as a creative collaborator. These policies should differentiate between deliberate deception and legitimate AI-assisted writing.

Promote AI Literacy: Incorporate AI literacy into the curriculum, ensuring that students understand the capabilities, limitations, and ethical considerations surrounding AI-powered writing tools.

Encourage Critical Thinking: Foster critical thinking skills in students, enabling them to evaluate AI-generated content and make ethical decisions about when to collaborate with AI.

Embrace AI as a Creative Collaborator: Encourage students and authors to explore AI as a creative collaborator, fostering innovation and new approaches to writing.

Address Bias and Fairness: Implement rigorous oversight and refinement processes to mitigate bias in AI-generated content, ensuring fairness and ethical AI collaboration.

Enhance Transparency: Promote transparency in AI co-authorship by encouraging authors to explicitly acknowledge AI's contributions, enabling readers to make informed judgments about content.

Develop Ethical Frameworks: Engage in the development of ethical frameworks that address AI's role as a co-author and its impact on intellectual property, creativity, and society.

Recognize and Reward Collaborative Authorship: Academic institutions should acknowledge collaborative authorship that involves AI as a valid and innovative approach, considering it in evaluations and recognitions.

2.6 Recommendations for Future Research
The evolving landscape of plagiarism, authorship, and digital literacy in the age of generative AI raises numerous questions and areas for further investigation. To advance our understanding and inform future policy decisions, we propose several avenues for future research:

Exploring the ethical dimensions of AI's role as a co-author is a pressing concern. This research area should encompass not only defining the legal rights and responsibilities of AI systems in creative endeavours but also examining the ethical considerations of AI-generated content in various contexts beyond academia. This broader perspective will shed light on how AI can be integrated into creative fields like journalism, literature, and art, while safeguarding ethical standards. The mitigation of bias in AI-generated content is also a paramount concern. Future research should delve into advanced techniques for reducing biases in AI-generated text. This entails developing AI models capable of recognizing and mitigating biases effectively. Moreover, the impact of biases on different reader demographics should be rigorously studied to ensure that AI-generated content is fair, unbiased, and inclusive.

Enhancing the transparency and explainability of AI writing tools is essential for fostering trust among users. Research in this area should focus on developing methodologies and standards for disclosing AI's contributions in a clear and understandable manner. This includes exploring how AI-generated content can be marked, labeled, or attributed to maintain transparency and inform consumers of the content's origins. The integration of AI in creative industries presents both opportunities and challenges. Future research should investigate the economic and societal impact of AI-generated content on creative fields such as literature, music, and film. This includes examining issues related to copyright, intellectual property, fair compensation, and the evolving role of human creators in AI-augmented environments.

Developing effective pedagogical strategies for AI literacy in educational settings is crucial. Research should explore innovative teaching methods and curricula for introducing students to AI co-authorship ethically and responsibly. This includes evaluating the impact of AI literacy programs on students' writing skills, critical thinking abilities, and ethical decision-making. Longitudinal studies are needed to assess the long-term effects of AI collaboration on students' writing habits, creativity, and academic performance. Research should investigate whether students who engage in AI co-authorship develop distinct writing styles, benefit from improved writing efficiency, or encounter challenges in transitioning to traditional writing tasks. Comparative studies can help identify the strengths and weaknesses of different AI writing assistants. Future research should assess the performance, capabilities, and ethical considerations of various AI models and platforms to assist educators and students in making informed choices regarding AI tools.

As institutions adapt their policies to accommodate AI co-authorship, research should assess the practical implementation of these policies and their impact on academic integrity and creativity. This includes examining the effectiveness of plagiarism policy revisions and evaluating how they influence students' behavior and ethical decision-making. In conclusion, the evolving landscape of plagiarism, authorship, and digital literacy in the age of generative AI presents a rich field for multidisciplinary research. Exploring the ethical, pedagogical, and societal dimensions of AI's role in content creation will contribute to informed policy decisions and shape the responsible integration of AI into academia and creative industries.


2.7 Conclusion
The evolving landscape of authorship and plagiarism in the age of generative AI challenges established norms and necessitates a re-evaluation of plagiarism definitions and policies in higher education. Recognizing AI as a creative collaborator rather than a tool for deception is crucial for maintaining academic integrity. Furthermore, this transformation sparks deep ethical reflections about AI's role as a co-author and the implications for intellectual property, fairness, and bias in written content.

As we navigate this uncharted territory, it is imperative for educational institutions, policymakers, and the academic community to engage in thoughtful deliberations and policy reforms. Embracing AI as a creative collaborator while upholding ethical standards will not only redefine authorship but also pave the way for innovative, inclusive, and responsible academic practices in the 21st century. This transformative journey requires a comprehensive re-evaluation of existing policies, transparent practices, and the development of ethical frameworks that can guide this evolving landscape.

2.8 References

Chen, L. (2023). The Ethical Dimensions of AI-Generated Content in Academia. Journal of Educational Ethics, 7(2), 143-158.

Kimura, S. (2021). Towards a New Model of Authorship: Collaborative Writing with AI. AI & Ethics, 5(4), 319-332.

Mendoza, A. (2022). AI and Pedagogy: Exploring Creative Collaborations in Higher Education. Educational Technology, 15(3), 217-230.

Ogundele, T. (2020). Redefining Authorship in the Age of AI. Journal of Digital Scholarship in Teaching and Learning, 4(1), 45-56.

Rahman, N. (2022). Ethical Considerations in AI-Assisted Authorship. Ethics in Technology & Education, 8(3), 215-230.

Singh, R. (2022). The Impact of AI on Plagiarism: Challenges and Opportunities. International Journal of Educational Technology, 16(2), 123-138.

Smith, A. (2022). Collaborative Authorship: Rethinking Plagiarism in the Era of AI. Journal of Academic Writing, 10(1), 45-60.

Wu, Y. (2023). AI as Co-Author: Legal and Ethical Implications. Journal of Law and Technology, 12(4), 321-336.

3. Critical Reflection

Nothing changed about the writing process, and everything changed when using ChatGPT 3.5 to compose the digital literacy position paper featured in this article, titled “Rethinking Plagiarism, Authorship, and Digital Literacy in the Age of Generative AI: A Call for Policy Redefinition in Higher Education.” The writing – the ideas – still began in my mind when using the generative AI tool. I channelled ideas through language to craft the series of prompts that resulted in the development of the position paper. The paper conveyed the broad points I crafted, but in nuanced detail, formed not by my hand but through the LLM-based AI tool. In this regard, I felt there was some element of craft to the ChatGPT-driven composing process. I continued feeding prompts – the indirect craftsman – volleying ideas through ChatGPT to shape the writing into the vision for the piece in my mind. Over the course of 50 or more iterations, piece by piece, the position paper began to take shape.

As I fed each prompt, I waited as the ChatGPT cursor blinked before chugging across the screen, shaping the broad ideas I had prompted into a well-organized piece of writing on the topic of digital literacy, AI, and plagiarism. A closer read of the output revealed the work is somewhat stylistically flat, the textual equivalent of a residential subdivision with streets lined with rows of identical houses. ChatGPT gave me clean, safe prose; the sentences of numbingly similar length and cadence. A closer examination also revealed the position paper is substantively lacking, though it sounds logical and well-organized. The citations are complete confabulations, as are some words, including “explainability,” which ChatGPT used several times. The recommendations section, no matter how many times and ways I prompted, is still represented as a list of ideas, rather than the cohesive, well-developed paragraphs I requested in my prompts.

Contrastingly, as I write this critical reflection, my pen moves across the page of my legal pad, sometimes quickly, smoothly, sometimes haltingly, as I pause to consider just-right semantic and syntactic choices. This connected composing process is the one I always return to when I need to reflect or ideate about a complex project or work through the process of creating. There is something real and tangible about writing in this way, the smell of the ink, the feel of the page. The movement of a half-formed thought as it channels through my hand and pen to communicate an idea, however imperfectly formed, committing it to the page. Something organic, something primitive, the mind-body connection rooted to reality with the point of my ballpoint pen crawling across the page. What is most striking to me as I write this way is how jarring the contrast is between composing by hand and writing with an LLM-based AI tool, and yet how similarly fraught both are with the intricate decisions of craft.

I see a dialectical tension of sorts through this lens of contrasting composing processes. The longhand composing process provides contrast between my mind and the medium, working synergistically to create writing through the active pursuit of meaning-making, writing furiously and messily in half-cursive, half-printed handwriting as the ideas fervently take shape and spread across the page – squiggles of black ink – evidence that man is, as Burke (1963) contended, a symbol-making, symbol-using animal.

In contrast, the writing process I used when relying solely on ChatGPT to compose felt disconnected, divorced from rhetorical choices. Separateness defined the process of composing with ChatGPT, where my composing knowledge was separate from the words and ideas on the page, where my original human ideas were broadly represented but shaped by the machine (a purposefully passive act), where meaning-making rendered me the indirect craftsman. This machine-driven composing process could be tragic, could be a violation of the purity of the craft of writing as a human act – perhaps the most human act – of representing our humanness and humanity. The key word is “if.” The craft of writing would be compromised if the process stops with the output ChatGPT produces, if the entirety of what is produced is the product of the LLM.

Gray writing is a term some have used to describe this writing, like the gray water that exits a building through its plumbing system. Simply because generative AI tools can write a piece in its entirety without the guidance of the human hand does not mean this is the way it should be. To use ChatGPT and other LLM-based AI tools powerfully, we need to use technology to shape our message in a way that makes us feel connected with the message. As co-creators with AI, we must maintain the human connection, which demonstrates that the craft of writing is more essential than ever.

In contrast, to volley our thoughts to a machine and allow the machine to craft the message would be to cede control and assume the role of indirect craftsman. To iterate and iterate the prompts is not enough. It is like an in-class drawing game I play with small groups of my organizational behaviour graduate students, where a group leader narrates to their group members how to draw a picture that only the leader can see. The leader who sees the picture and describes the drawing has an ethical and professional responsibility to maintain enough control over the image to guide the group in understanding the big picture, metaphorically and literally. The same is true with LLM-based composing processes. There must be a human override, a human expert who operates on a higher level, editing and shaping and crafting the message, becoming a direct and active co-creator with a generative AI tool. When writing with generative AI tools, we may find it useful, mesmerizing even, to volley to the machine, to maximize our own reach or capacity or potential, but we must always remember to pull the message back to ourselves, to filter it through our human hands and our human minds. Regarding the craft of writing in higher education contexts in the age of generative AI, the human mind is more essential than ever.


References

Anson, C. M. (2022). AI-based text generation and the social construction of “fraudulent authorship”: A revisitation. Composition Studies, 50(1), 37-46.

Baidoo-Anu, D., & Owusu Ansah, L. (2023). Education in the era of generative artificial intelligence (AI): Understanding the potential benefits of ChatGPT in promoting teaching and learning. SSRN. http://dx.doi.org/10.2139/ssrn.4337484

Bozkurt, A. (2023). Generative artificial intelligence (AI) powered conversational educational agents: The inevitable paradigm shift. Asian Journal of Distance Education, 18(1).

Burke, K. (1963). Definition of man. The Hudson Review, 16(4), 491-514.

Kasneci, E., Seßler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., & Kasneci, G. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences, 103, 102274.

OpenAI. (2023). ChatGPT (September 25 Version) [Large language model]. https://chat.openai.com/c/4f5821be-b4a8-4191-a275-1e323a433da9

* Corresponding author: rohn1al@cmich.edu