This article critically evaluates responses provided by ChatGPT 3.5 and 4, at three different time intervals, to questions about how the chatbots compose their answers and the extent to which they report themselves as being a ‘trusted source.’ The purpose is to uncover key assumptions embedded within ChatGPT and thereby provide material that aids and promotes literacy on generative AI, and on ChatGPT specifically. The responses provided by ChatGPT are very rarely definitive and are instead predominantly nuanced. Such nuancing significantly understates the likelihood that its responses contain incorrect information and therefore instils an over-inflated sense of trust in the responses provided, especially in a novice user of ChatGPT. Further, in its answers it consistently offloads the blame for biased and incorrect answers onto the training data it uses, rather than directly accepting responsibility when it produces incorrect and/or biased answers.
This work is licensed under a Creative Commons Attribution 4.0 International License.
Authors contributing to the Irish Journal of Technology Enhanced Learning retain the copyright of their article and at the same time agree to publish their articles under the terms of the Creative Commons CC-BY 4.0 License (http://creativecommons.org/licenses/by/4.0/), allowing third parties to copy and redistribute the material in any medium or format, and to remix, transform, and build upon the material, for any purpose, even commercially, on condition that appropriate credit is given, that a link to the license is provided, and that any changes made are indicated. This may be done in any reasonable manner, but not in any way that suggests the licensor endorses the third party or their use.