The views expressed by contributors are their own and not the view of The Hill

Testing students in the age of ChatGPT: Asking the tough question

How should universities and other institutions of learning adjust their testing policies in response to the emergence and widespread adoption of ChatGPT, the artificial intelligence chatbot? This and related questions are fiercely debated across academic institutions. ChatGPT has performed impressively on a wide range of academic assessments, from medical exams to software development and from literature to law.

In the business school community, I recently published a report on how ChatGPT would have received a B to B- grade in my Operations Management MBA class at Wharton, and I have heard from several colleagues that they have replicated these results in other business courses.

So, how should business schools and the academic community at large adjust their testing protocols? Should we ban the new technology? Or should we embrace it?

As a business school professor and the chair of a large academic department, I appreciate the need for clear rules and exam policies. However, I feel that we are answering the wrong question. Instead, I propose that we ask ourselves a deeper question: Why do we test our students in the first place?

Few people, if any, enjoy tests. Tests create stress for students and work for faculty. So, why do we test? In my view, tests serve three important functions: skill certification, feedback, and student engagement.

Consider skill certification first. Medicine, law, accounting, and many other professions require a certification before one is legally allowed to practice in the profession, just like drivers have to pass a driver’s license exam before they can legally operate a vehicle. Similarly, AP exams or college courses certify the mastery of specific skills.

When it comes to skill certification, it should be easy to agree that we need to ban, or at least regulate, the use of such technology.

Just as we would not trust a doctor who asked a friend or family member to take the certification exam in his or her place, we would — and should — be reluctant to trust a future physician who only passed the exam thanks to an AI bot.

Second, we use tests to get feedback on what students have learned so that we can pace our classes to match the rate at which our students learn. At times, we might also have to customize the pace of the class for students who are either falling behind or are already way ahead of their peers. For these types of tests, the use of ChatGPT is counterproductive. What is the point of teaching advanced topics to a class or a student that has not yet absorbed the foundational material, yet excels on the tests thanks to technology?

Nevertheless, there exists a role here for ChatGPT — not as a tool to be used during the test, but as a patient and non-judgmental tutor that can help a struggling student catch up with the rest of the class or challenge an excellent student with content that might be out of reach for his or her peers.

It is our role as educators to create a culture of psychological safety in our institutions that encourages students to speak up when they are struggling in class rather than faking their level of knowledge with the help of ChatGPT.

The third function of testing is to have students engage with the material. We give students an assignment, be it comparing the views of the French philosophers Jean-Paul Sartre and Albert Camus or describing the political system of Germany in the late 19th century, and then require them to write a five-page essay in response. Our students — hopefully — go to the library, spend hours doing the work and diligently fill the pages. Truth be told, even the best essays created this way will not have any lasting impact on the world. But that doesn't matter — their role is to engage the student and lift him or her to a higher level of knowledge.

Equipped with ChatGPT, of course, students can now shortcut this process and be done with a five-minute time investment. No engagement, no learning — only wasted faculty time spent grading computer-generated essays.

But, how about we think of ChatGPT not as a cheating tool but as an engagement catalyst?

Rather than counting pages, we could provide opportunities. We could have our students engage in a chat (in French, of course) with a virtual Jean-Paul Sartre. Or they could stroll through 19th-century Berlin brought to life by AI-based image generators such as Midjourney or DALL-E.

We could also use ChatGPT as a tool to help students overcome the common problem of writer's block. For many students, it is easier to find the mistakes in a text they read than to create their own text. My colleague, Ethan Mollick, has encouraged students to use ChatGPT to come up with ideas for course projects. As with writing, coming up with a creative project idea is hard. Reading ten ideas from an AI bot, in contrast, is not. Ideally, the student is stimulated by reading through those technology-generated ideas and thinks, "I can do better than that." The first step is always the hardest, so why not allow students to use ChatGPT for it?

As we think about creating new testing policies, I encourage fellow educators to work through their curriculum and to question the purpose of each of their tests.

When tests are used for skill certification, ChatGPT needs to be banned or regulated to protect the integrity of professional and other academic degrees.

Tests that provide educators with feedback on student learning will also suffer from the use of ChatGPT, yet the technology can be used to tutor individual students.

Finally, tests that exist to drive student engagement can be augmented by technology, be it by providing immersive learning experiences, by creating prompts and stimuli for ideation, or by other means that neither we nor ChatGPT can imagine.

When I look at ChatGPT beyond its threat of derailing our current testing protocols, I see the opportunity to reimagine the way we teach. If, five years from now, we still widely ban ChatGPT in our educational institutions and have somehow managed to preserve the status quo, then, in my view, we will have missed a major opportunity to improve the education of our students.

Christian Terwiesch is a professor of Operations, Information and Decisions at the Wharton School of the University of Pennsylvania and co-director of the Mack Institute for Innovation Management.
