Local universities prepared to teach ethics of using generative AI

As students begin arriving at colleges and universities to kick off their fall semester, they’ll have tools with them they didn’t have this time last year: ChatGPT and similar generative AI chatbots like Google Bard, Microsoft Bing Chat, and Claude.

How are local schools handling these platforms, which can produce human-like content, such as essays, based on a user's input? You may be surprised.

“I work with many universities and the thing I always tell them is not to over-inflate the threat,” said Jeff Allan, the director of the Institute for Responsible Technology at Nazareth University. “I try to implore that there is a lot of opportunity for good here. You have a tool that can promote critical thinking to a degree you’ve never had before.”

A psychologist and AI technology expert who previously founded a Dartmouth College-affiliated company with Silicon Valley roots, Allan joined the faculty at Nazareth in February 2023, a month after ChatGPT reached 100 million monthly users.

According to recent statistics from Demand Sage, 64.53% of ChatGPT's monthly users as of June 2023 were between 18 and 34 years old.

“About a month after I came into the role ChatGPT really blew up,” said Allan, who noted Nazareth was already poised for the arrival of generative AI chatbots thanks to the creation of its Institute for Responsible Technology (originally named the Institute for Technology, Artificial Intelligence and Society). It was formed in 2019 by Dr. Yousuf George, who is now the university’s vice president for strategy and innovation.

“[George] set it up with the idea at the same time that AI was an emerging technology that, in general, was playing a big role in society and was going to have a pretty visible impact on academia,” Allan said. “It was very forward-thinking and it’s the only institution of its type in the country right now.”

The institute's uniqueness comes from its approach of not only training technologists who are skilled in their field but also preparing them with the ethical knowledge that surrounds it.

Instructors want to impart the importance of AI ethics to those beyond traditional tech majors too, Allan said.

“AI really isn’t going anywhere so it would be in our best interest and our students’ best interests to learn how to use it ethically,” said Allan, who noted that Nazareth offered its first class this summer that made ChatGPT usage a part of the final grade. “We want students and faculty to embrace it and to find ways to use it without demeaning the educational process.”

Roberts Wesleyan University established an AI task force this spring to begin answering the question of how students, faculty and staff can use generative AI responsibly. The task force also began creating a university-wide policy around usage, which is still in development.

“We’re being open-minded about this technology,” said Dr. Marlene Collins-Blair, Roberts Wesleyan’s assistant vice president for academic affairs who is part of the AI task force. “To have a banner saying, ‘Don’t use it!’ is not us. We want to be open to the positive aspects of this.”

The task force surveyed faculty in June and found that 80% want to understand how to use AI to best serve students. Through conversations with students, Collins-Blair has also found the vast majority welcome generative AI and parameters for its usage.

“They’re digital natives,” Collins-Blair said. “They want to use it responsibly and we want our students to be AI fluent but have an ethical approach. I believe we can help our students learn ethical and responsible use of AI.”

This semester the university will continue conversations, working groups, and workshops for students, faculty, and staff regarding the responsible use of AI. It also encourages professors to include syllabus statements spelling out their expectations for generative AI use in their classrooms.

At the Rochester Institute of Technology (RIT), Dr. Neil Hair has already embraced the responsible, ethical, and creative use of generative AI in his classroom and assignments, where he believes it can "level the playing field," especially for students whose native language is not English.

“To be able to critically evaluate what generative AI has produced is something I think students at RIT are going to walk away from after their four or five years with us,” said Hair, who is the executive director of the Center for Teaching and Learning at RIT. “Faculty have a huge role to play in facilitating that understanding and progression of thinking.”

Hair is also an associate professor of marketing at RIT's E. Philip Saunders College of Business and has received five institutional awards for teaching excellence.

To enhance his students’ critical thinking skills and bolster equity and inclusion, he supports his students using generative AI in his classroom and assignments, like literature reviews, if they acknowledge they’ve used it and can explain how.

“Nobody can turn the Internet or AI off,” Hair said. “But there are guardrails we need to put in place so that our students and faculty know how they can use it effectively.”

He says faculty at RIT are also using generative AI to build course materials, explore counterarguments, translate, summarize research articles, teach skills such as prompt engineering, and support their own academic research.

“AI is not going to spell the end of higher education,” Hair said. “It raises the importance of academics in the conversation. Faculty are absolutely critical in understanding and showing students how to use these tools properly.”

Caurie Putnam is a Rochester-area freelance writer.