
NYC Gazette

Sunday, October 6, 2024

ChatGPT Guidance for the CUNY Classroom


Classroom | Pexels by Pixabay

ChatGPT is the talk of the ivory tower. The AI language generator writes text that sounds uncannily human, including college-level essays. It also solves math problems, writes computer code, and offers up plays, poetry, and more.

The possibilities for both learning and cheating are ripe. Here, Graduate Center faculty and Ph.D. students with expertise in interactive technology and teaching share their advice on how to teach in the age of AI chatbots. 

Distinguished Professor Cathy N. Davidson (English, Digital Humanities, Data Analysis and Visualization); founding director of The Futures Initiative; Senior Adviser on Transformation to CUNY Chancellor Félix V. Matos Rodríguez

ChatGPT may be delivering the wake-up call that higher education should have received on April 22, 1993. That’s the day computer scientists at the National Center for Supercomputing Applications made the Mosaic 1.0 browser free and available to the public, essentially creating the web as we know it: a place where anyone with an internet connection can publish their ideas — without an editor, without credentials or authority, without citation, and even anonymously, with information that might be entirely inaccurate. Out with the file cards and library visits, and in with Wikipedia. (Remember when many colleges and universities “banned” Wikipedia from the classroom around 2007?) This perspective is important.

ChatGPT makes that astonishing way of knowing, communicating, and lying online easier than ever. It also makes possible an extraordinary educational exercise — one that extends many of the exercises in The New College Classroom (co-authored with Graduate Center alumna Christina Katopodis (Ph.D. ’21, English)). 

I’m not teaching this term, but if I were teaching either undergraduates or graduate students in a course where they would normally write an essay, I would have them work on the same general topic and come up with their own prompts to see what essay ChatGPT would generate for them. I’d then have them work in small groups to compare the essays that their prompts generated.

They might, for example, use coded ideological words in the prompts and see how those influence the essays. I’d also have them double-check the references for accuracy and inclusiveness. For example, did ChatGPT pick up a scholar’s entire argument or just what fit the prompt?

And I’d have them think about all the things ChatGPT left out and then write a brief essay analyzing, with critical acumen, those omissions, biases, and misinterpretations. I’d also have them analyze what things they admired in the ChatGPT essay, or whether they admired someone else’s essay more than their own.

There’s much to learn by writing along with — rather than banning — new technologies, whether it’s Wikipedia in 2007 or ChatGPT in 2023.

Roderick Hurley, Ph.D. Candidate (Psychology: Critical Social/Personality Psychology training area); Graduate Fellow and Communications Director, The Futures Initiative; Adjunct Lecturer, College of Staten Island

While technology facilitates progress, we know from history that this progress may not benefit all members of society to the same degree and can have disparate long-term impacts. Furthermore, what is seen as progress in one light may be clearly visible as a tool for deepening inequalities and widening gaps in another. Tools like ChatGPT need to be discussed in classrooms and researched in academia because they open up exciting possibilities for learning, but they also present huge challenges where academic integrity and education equity are concerned.

Knowing that some students still don’t own a computer or have 24/7 internet access, I’m worried about the potential for AI technology in education to put underprivileged students at a greater disadvantage. I have concerns about access. Will individuals have to pay for access after the testing phase? When servers are busy, who gets priority access? I also have questions about content. Who determines the data used to train the model? A warning on the ChatGPT interface indicates that the AI may generate “incorrect information, harmful instructions or biased content.”

Right now, rather than outlawing this technology, and before embracing it, our focus should be on determining how to incorporate and coexist with it in a way that both facilitates learning and helps increase equity in education. I’m not sure what that looks like, but I definitely plan to discuss it with my students, and I hope we can begin to figure it out together.

Luke Waltzer, Director of the Graduate Center’s Teaching & Learning Center

Some faculty see ChatGPT as an exciting window into a future where vast amounts of information can be requested and quickly presented in customizable and adaptable ways, and where students can get immediate feedback on various forms of writing, including code and drafts of prose. 

Conversely, many faculty have expressed concern about the ethical and epistemological implications of ChatGPT for the future of writing, thinking, and teaching. Some have advocated for shifting assessments and assignments to pen-and-paper, oral, or multimodal formats that make it impossible for students to submit AI-generated text. There are even new tools being marketed to institutions that promise to detect whether a piece of writing was generated by ChatGPT.

We should avoid allowing technology to draw us into an adversarial relationship with our students and focus our efforts on designing authentic and purposeful learning experiences and assessments. ChatGPT is powerful, but while the prose it produces is grammatically correct, it is bland and analytically weak. Its outputs lack the voice and specificity that distinguish human writing. If faculty are given adequate time to get to know their students as writers and as thinkers, they'll be able to determine quite easily if work students submit is their own.

The emergence of ChatGPT gives us an opportunity to think about what AI is: increasingly sophisticated mathematical predictors of sequences of words, with no intent, meaning, or native criticality. It's also important to remember that the tool reproduces and reinscribes the biases of the data upon which it has been trained. 
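That description is worth unpacking. Below is a minimal sketch, in Python, of what a “mathematical predictor of sequences of words” means at its simplest: a toy bigram model that picks each next word purely from frequency counts. This is an illustration only; the tiny corpus and function names are invented for the example, and ChatGPT’s actual neural network is vastly more sophisticated, but the underlying principle of predicting the next word from prior data is the same.

```python
# Toy illustration: a language model as a statistical predictor of word
# sequences. A bigram model chooses the next word from counts, not from
# understanding -- it has no intent, meaning, or native criticality.
from collections import Counter, defaultdict
import random

# A tiny made-up "training corpus" (an assumption for this sketch).
corpus = "the model predicts the next word and the next word follows the last word".split()

# Count how often each word follows each other word.
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def predict_next(word):
    """Sample a next word in proportion to how often it followed `word`."""
    counts = transitions.get(word)
    if not counts:  # word never seen with a successor
        return None
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate a short continuation, one predicted word at a time.
text = ["the"]
for _ in range(8):
    nxt = predict_next(text[-1])
    if nxt is None:
        break
    text.append(nxt)
print(" ".join(text))
```

Any fluency in the output is a byproduct of those counts, which is also why such a model can only reproduce whatever patterns, and biases, its training data contains.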

There may still be moments where such a tool can act as a resource for students navigating our courses. We should accept that if it's useful and free, students are going to use it. Faculty would be wise to explore how it interacts with their assignments, but we should continue to prioritize making sure that the work we ask students to do emerges organically from their experiences in our classes.

Adashima Oyo (Ph.D. ’22, Social Welfare); Executive Director of The Futures Initiative; Adjunct Lecturer at Brooklyn College and New York University

Plagiarism has been around for a long time, and I don’t suspect it’s going anywhere. Regardless of whether college instructors succeed at preventing their students from using ChatGPT, chatbots, or whatever, a larger question looms: How do we empower students and remind them that their own original ideas and work are always superior to an artificially generated idea (or 10-page paper)?

This is especially important for first-generation college students and students of color at all degree levels who are still trying to determine if they belong. That said, it’s also dangerous territory to question whether a great paper or assignment was completed using ChatGPT.

Some Black students and other students of color could rightfully perceive that as an academic microaggression. It’s the equivalent of: “You speak so well [for a Black person].” One plus side to the ChatGPT debates currently tormenting academics is that instructors are being forced to adapt. That’s a good thing! Teaching is not static; it’s dynamic and should always be evolving.

Professor Matthew K. Gold (English, Digital Humanities, Data Analysis and Visualization); executive officer of the master’s programs in Digital Humanities and Data Analysis and Visualization; and director of GC Digital Initiatives

While ChatGPT is new, the phenomenon it represents — a disruptive technology raising fears of automation and cheating — is anything but. Before ChatGPT, there was Wikipedia; before Wikipedia, TV; before TV, the camera; before the camera, the printing press; and on and on, all the way back to the written alphabet supplanting oral knowledge traditions. At each step, these new technologies made people worry that the artisan work of the human hand and mind would be lost, that human creativity, authenticity, authority, and ingenuity would be subsumed and overcome by mechanical form.

Needless to say, that hasn’t happened. The camera did not push the hand-drawn sketch into oblivion; TV did not presage the end of writing and reading. Instead, we’ve learned to live with, work with, and manipulate each of these technologies, and to put them in their proper places. And the same will happen with ChatGPT and other artificial intelligence technologies. 

No, instructors don’t need to completely abandon everything they’ve been doing in the classroom now that ChatGPT is in the world. But, yes, they can educate themselves and their students about the nature of ChatGPT, and they might benefit from confronting this technology head-on in the classroom — studying it with their students, designing assignments that ask students to reflect on it explicitly, and playing with it together to explore its limitations. Imagine an assignment that asked students to write a short original paper, feed the paper prompt through ChatGPT, and then compare and analyze the two resulting compositions.

As with so many new technological developments, it would behoove us to ask probing questions of ChatGPT, to reflect on its form, and to ponder its weaknesses and gaps. How was it trained? What does it assume? What are its limitations? Who wrote its code? From what knowledge models does it emerge? How does the algorithm’s understanding of “style” differ from our own? These are questions to think with. And if we can respond to new technologies with new questions, we may find that it is in the questions themselves that we discover some of the many things that differentiate human from machine.

