
Rethinking academic integrity and ethics in AI | Viewpoint



Graham Anthony

Academic integrity is one of the most urgent conversations in higher education right now, and also one of the most misunderstood. The instinct across many institutions has been to treat AI as a problem to detect and police. However, the more honest conversation is about what AI is revealing about the assessments we’ve been using all along.

In some disciplines, the integrity question is relatively straightforward. A math proof still requires a math proof. In disciplines where AI tools can credibly produce student-quality work, the disruption runs deeper. When AI can generate a passable career development reflection essay, it forces us to ask what that assignment was actually measuring. The real value of a career reflection isn’t polished prose. It’s the student wrestling with their own lived experiences and taking agency over where they’re headed. AI can produce the artifact, but it can’t do the thinking. The best evaluators in these disciplines already know this. A colleague recently flagged a student’s lack of growth across reflections as evidence of disengagement, a judgment that requires knowing your students individually. That kind of assessment is harder to scale, but it’s exactly the kind AI can’t replicate.

The same shift is happening in technical fields. Programming education, for example, is moving away from writing code as the primary demonstration of competence and toward the critical analysis that surrounds it: deciding what to build, how to architect it, and which tradeoffs matter. The skill is increasingly about algorithmic thinking and judgment, not syntax. Critical thinkers decide whether the problem is worth solving in the first place, define what a holistic solution looks like, and develop a framework for getting there.

This is the thread that connects academic integrity to the broader argument I’ve made previously in this publication about rethinking in the AI era. As intelligence itself becomes a commodity, the skills that gain value are the ones AI can’t replicate: critical thinking, contextual judgment, and the ability to evaluate competing priorities under uncertainty. That’s true for executives, and it’s equally true for students. The question for educators is whether our assessments develop and measure those capabilities, or whether we’ve been testing recall and calling it rigor.

Shifting assessment toward critical thinking is possible, but it isn’t painless. Active learning environments that force students to engage with problems in real time, defend their reasoning, and iterate offer a strong model. In-person defenses of student work, collaborative problem-solving, and live application exercises push students past the point where AI-generated content can carry them. But this approach has real limitations. It relies on synchronous interaction, which makes the same depth harder to replicate in asynchronous and fully online formats. That’s a tension the field hasn’t resolved, and pretending otherwise doesn’t help.

There’s another tension worth naming. Many of the assessment strategies that encourage critical thinking, such as oral defenses, live presentations, and interactive discussions, also make student work less anonymous. Higher education has spent decades building bias mitigation into evaluation techniques like blind grading. A move toward more personal, more human forms of assessment opens the door to the very biases those systems were designed to prevent. This isn’t a reason to abandon the shift, but it does mean institutions need to design new safeguards with their eyes open.

Ethics is the natural bridge between these challenges. Students who grapple with the ethical implications of AI, analyzing who is affected by a model’s predictions, what happens when automation displaces judgment, and where accountability sits when things go wrong, are doing exactly the kind of critical thinking we want. Ethical reasoning can’t be reduced to a formula. It requires students to analyze specific contexts, weigh competing values, and defend a position. Doing this in real time is not something AI handles well, because it demands the human judgment and presence that make the analysis meaningful.

At Golisano Institute for Business and Entrepreneurship, we layer ethical discussion into every course that touches AI, not as a separate module but as an integrated part of how students apply these tools to real business problems. When a student is building an AI solution for a real organizational challenge, the ethical questions aren’t abstract. They’re embedded in the work itself. That integration naturally requires critical thought, which is exactly where the learning lives.

Academic integrity in the age of AI isn’t a detection problem. It’s a design problem that challenges educators to build assessments worthy of the thinking we want students to do.

Graham Anthony is Assistant Vice President for Educational Technologies and Innovation at Golisano Institute for Business & Entrepreneurship.
