I grip my pencil tightly as I scribble down what can only be described as a rudimentary summary of what I studied the night before. The side of my hand begins to cramp, but I don’t have enough time to shake it out. I’m writing yet another in-class essay — a type of assessment increasing in frequency at the high school.
Since ChatGPT launched in 2022 (quickly followed by other chatbots), teachers have been phasing out long-term projects in favor of closed-note, timed assignments that must be completed during a single class period. In-class essays, a common assessment under this new structure, require students to construct a claim, recall evidence, and provide adequate reasoning to support their thesis, all within the allotted hour-long block.
Despite issues with these assignments, the transition to them makes sense. Educators rightfully fear that students may misuse AI on heavily weighted assignments completed outside of class. At the high school in particular, students tend to prioritize grades over learning due to the high-pressure environment. Thus, if AI helps them get a better grade, they might be inclined to use it, sacrificing their academic integrity in the process.
Currently, the high school’s Student Handbook includes just one sentence about AI: “Students may not use an artificial intelligence program to aid their work on an assignment unless explicitly directed to do so by their instructor.” Essentially, without teacher direction to use AI, students are prohibited from using the technology for that class.
Although this policy discourages students from violating their academic integrity, it also prevents them from using AI for more mundane purposes, such as reviewing for tests. For multiple-choice assessments in particular, AI can create mock tests for students to practice with.
This points to the current system’s crucial flaw. Unless teachers address every potential use of AI, innocent or not, and “explicitly direct” a student as to when they can use it, even innocuous applications of AI violate the handbook. Thus, a student could technically be subject to discipline for a harmless use of the technology.
To improve the system, the administration’s first step should be destigmatizing AI. When the general public received its first glimpse into ChatGPT, the overwhelming reaction from teachers was characterized by alarm and uncertainty. Today, many educators at the high school still harbor misgivings about AI.
One consequence of this fear is the rise of in-class assessments. However, these have issues of their own.
Typically, teachers don’t provide prompts for these assignments ahead of time. This means that students are left to study a multitude of topics, increasing their workload.
On top of this, the short duration of in-class assignments inherently decreases the quality of students’ thinking. Long-term projects and essays typically encourage students to think outside the box. Removing them from the assignment pool discourages deep reflection and creativity, as students are unable to consider different angles of a prompt outside of class.
Another consequence of the AI policy is some teachers’ outright, “no-exception” bans on the technology in their classrooms. By allowing teachers to enact these bans, perhaps without adequate understanding of the technology, the administration enforces a one-dimensional outlook on AI. Therefore, instead of leaving the issue entirely to teacher discretion without ensuring that teachers are properly informed about the benefits and detriments of AI, the school should facilitate open dialogue about the technology and its influence on education — including how it can be helpful.
Of course, the high school’s primary responsibility is to prepare its students for life after secondary education. Due to AI’s rapid development in recent years, students’ lives will inevitably revolve around the technology in one way or another. Yet the current AI policy does not adequately ensure their readiness for future experiences with the technology. Some students at the high school lack any exposure to AI at all, as the policy permits teachers to prohibit them from using it entirely.
Even so, the current policy offers just enough room for educators to establish their own principles on AI while also protecting academic integrity. For now, that’s enough. Nonetheless, it certainly leaves something to be desired for the future.
That is to say, while the principle of academic integrity is indisputably important and central to the school’s mission, what constitutes such integrity should be redefined through the lens of AI. The school and its teachers should reconsider how they classify “cheating” with AI.
With this objective in mind, several educators have already begun experimenting with AI in their classes. In particular, some teachers allow students to use AI as a writing assistant on select assignments (with guidelines in place regarding disclosure and citations), while others have introduced a new type of assessment: having students correct and grade work created by AI. These types of assignments help students observe the strengths and weaknesses of AI, and, overall, they help facilitate productive discourse.
But these isolated efforts don’t represent the school’s current ideals. Fundamentally, the AI policy suits its current needs. However, in order for it to fulfill its potential, each member of the community must have at least some understanding of AI.
Nobody disputes that AI is here to stay. The question now is how we will continue adapting to its growing influence and capabilities: should we shun the technology or grow alongside it?