An AI Did Not Write This Post, But It Could Have
Let us start by setting the scene… It is the year 2030, I mean 2022, and the discussion comments in the Brightspace classroom and the essays you are evaluating have been written by a machine. Would you have noticed the difference between a text written by an AI (Artificial Intelligence) and one written by a human? Yes, it is entirely possible today that some of your students are using an AI tool to help them write their assignments.
GPT-3 is a Generative Pre-Trained Transformer created by OpenAI, an AI research and deployment company based in San Francisco. Since 2020, this tool has been open to the public to generate text and images, translate text into different languages, and write code. In June of this year, OpenAI also used Video PreTraining to teach an AI to play Minecraft (Baker, 2022). While this technology is new, the creators of GPT-3 have made significant improvements, to the point where you can now buy books on Amazon written by an AI.
If GPT-3 is open to the public, and anyone can use the tool without coding knowledge and experience, what might we expect the impact of this technology to be in higher education?
Let us start with some of the strengths and weaknesses of GPT-3, according to Eaton et al. (2021):
Strengths:
GPT-3 is open and accessible
It takes just a few minutes to produce a written work from human-supplied prompts
GPT-3 produces material that is largely grammatically correct
It creates new content every time you submit a prompt
Weaknesses:
The content generated needs to be checked since it might make up facts and stories
It can create content that is not concise
It can contradict itself
It can present logical errors
So far, Turnitin cannot identify that an assignment has been written by GPT-3 (we could consider this a strength as well, depending on who is using it)
If you look for journal articles in Google Scholar, some of the most common topics that appear when searching “GPT-3” and “Higher Education” are: academic integrity, how to use AI to support the evaluation of academic work, ethical considerations when using artificial intelligence, and rethinking teaching and assessments.
In addition to students using GPT-3, it might also be a tool for faculty. For example, Moore et al. (2022) assessed 143 students’ short answers in online Chemistry courses using a GPT-3 evaluation model. They found that the GPT-3 evaluation model agreed with the human rater 40% of the time when judging whether responses were of low or high quality. They also assessed whether GPT-3 could recognize the Bloom’s Taxonomy level of the answers. GPT-3 matched the human assessment 38% of the time when evaluating responses at the lowest level of Bloom’s Taxonomy. However, GPT-3 overestimated responses at the two highest levels of Bloom’s Taxonomy compared to the level assessed by humans. The authors conclude that GPT-3 can be improved with more robust datasets, but for now, it can help instructors with a first pass when evaluating student responses.
In the news, you can read articles about how AI might “kill college writing” (Schatten, 2022). Alternatively, you can read about how higher education will change because of the use of AI and, more specifically, GPT-3 (Inside Higher Ed, 2022). Or, you might be interested in reading about the experience of a group of faculty members who submitted an academic paper with GPT-3 being the first author (Thunström, 2022).
If GPT-3 evolves in higher education the way calculators did, becoming a commonly accepted tool we expect students to use, how do we design courses in a world of AI?
Carvalho et al. (2022) argue that educators and learners need to work together to develop learning experiences in an AI world. As the case of GPT-3 shows, we should be questioning which skills are intrinsically human if an AI can also create content and solve problems. To start, the authors suggest beginning at the philosophical and pedagogical level: defining the problem space of design for learning and considering the values of individuals in human development using the Value Creation Framework created by Wenger, Trayner, and De Laat (2011). At the level of pedagogical strategy and tactics, the authors suggest using futures thinking methods to help educators and learners envision “probable, plausible, possible, and preferable futures” (Carvalho, 2022, p. 4). In their conclusion, the authors urge educators to design learning experiences that include educator-to-student and student-to-student interactions, as well as interactions between students, educators, and AI.
I would like to close this post with these questions:
Have you explored using any of the AI platforms available today? I would suggest starting with www.copy.ai.
If you use essays and discussion boards in your courses, how do you know that an AI did not write some of the responses?
If a response was written by an AI and edited by a student, who owns that content?
How should plagiarism policies change today and ten years from now?
Would you use GPT-3 to help you write journal articles?
How would you incorporate futures thinking into the design of your courses?
What ethical and legal questions should we consider related to GPT-3?
Note: This post was not generated by an AI, although I wonder if the co-creation of the post with an AI would have developed a more creative product.
Acknowledgment: I would like to thank Jon Harbor, Jennifer Harrison, and Dennis Strouble for their suggestions and recommendations on this blog post.
Originally published on: https://www.linkedin.com/pulse/ai-did-write-post-could-have-maricel-lawrence/?trackingId=3CErsWnLSwLe2im7FX93sg%3D%3D