12/09/2024 | News release | Distributed by Public on 12/09/2024 09:20
By Dr. Chad Raymond, professor in the Department of Political Science and Department of Cultural, Environmental and Global Studies. This article is excerpted from Raymond's 2024 article in the Chronicle of Higher Education.
A global survey by the Digital Education Council found that 86% of university students now use AI in their studies. Notably, 80% of them said their university's integration of AI tools does not fully meet their expectations. With more than 75% of global knowledge workers using generative AI in the workplace, using this technology effectively and confidently is a skill students simply need to have.
For faculty struggling with how to deal with generative AI in the classroom, we can learn from how the field of mathematics responded to the introduction of the calculator 50 years ago.
Horrified at the thought of students never learning how to do long division with pencil and paper, some teachers banned calculators from their classrooms. Others argued that calculators helped reduce the amount of class time they spent teaching basic routines, so they could focus more on teaching reasoning, data interpretation and problem solving. It was unrealistic to demand that students do things by hand in the classroom when calculators had become common outside it. To these teachers, the world had changed, and math instruction needed to change with it. In the end, it did.
With generative AI, educators in the humanities and social sciences face that same technological predicament. Reading has been central to the study of philosophy, history and other fields for centuries, while writing has been the main vehicle for training students to become better thinkers and for assessing how good they are at it. Yet AI's reproduction of both activities is now indistinguishable from the real thing.
As data shows, many students are letting AI do this work for them. And for many humanities and social science faculty members, evaluating students has become an exercise in grading robots.
To respond to AI, some professors have resurrected assessment instruments from the pre-digital era, such as hand-written, proctored essay exams or in-person oral presentations, which come with their own complications. Others remain willfully ignorant, holding on to the delusion that their courses and teaching methods are so exciting that using AI would never enter their students' minds.
The solution for faculty members is twofold: We need to acknowledge that AI has rendered much of our teaching toolkit obsolete, and we need to adjust.
Minor tweaks to assignments and exams will do little to counter the fact that chatbots can distill thousands of pages of scholarship down to a few paragraphs in seconds. Instead, instructors need to move from the "what" approaches of the industrial era - students passively ingesting then regurgitating a prescribed body of information - to a "why" paradigm that turns students into builders of new knowledge through creative problem solving.
[Photo: Dr. Chad Raymond with students.]
To achieve this goal, we must shift to a project-based course design that taps students' natural curiosity while removing the advantages of outsourcing the work to AI. Here's what that might mean in practice:
While AI systems excel at retrieving, analyzing and synthesizing information, they aren't great at forming and evaluating options for dealing with complex social problems. That limitation becomes apparent to students if they try to delegate the entire problem-solving process to a machine.
Professors can use problem-oriented projects regardless of academic discipline. Projects with uncertain outcomes get students to invest emotionally and require original thinking, making AI use less attractive.
Another advantage projects have over other teaching methods is that they help make learning a team sport. In most organizations, people work in groups and collaboration is critical to success. Good ideas can arise from conversations, and people frequently learn tremendous amounts by teaching others. Working on a shared goal also fosters a sense of community. AI cannot yet replicate the benefits that come from these human interactions.
Finally, projects embed skill development within a meaningful context, making it easier for students to understand how they can transfer those skills to solving other problems. AI chatbots, in contrast, respond only to the immediate specifics of the prompts they are fed.
Emphasizing problem solving and project-oriented teaching isn't an argument for leaving students ignorant of basic principles while they pursue fruitless lines of questioning. An introductory economics student who doesn't understand the concept of interest rates will not be able to independently produce an authoritative analysis of Federal Reserve monetary policy by the end of the semester.
Faculty members should follow the lead of medical schools that use problem-based learning models in which students work to diagnose patients under the guidance of experienced physicians.
As the computer scientist Mitchel Resnick noted in 2017, educational systems are "stubbornly resistant" to change. Despite technological innovations like the internet, declining student interest and public perceptions of irrelevance, instruction in the humanities and social sciences has remained stuck in a print-age logic.
With AI, what used to work in the college classroom to some extent - telling students which questions deserved answers and what those answers were - no longer works at all. It's time we recognized the need to do things differently.