My Interview with ChatGPT: AI & Ethics, Academic Integrity, and Student Leadership

Emily Ihrke | Leadership Programs, Pedagogy of Leadership Interview, Student Leadership

I’ve spent the majority of my life as a teacher of writing. Twenty-six years in the laborious art and science of helping students learn to write and write to learn. I care deeply about student thinking and student expression.

For a time I led NCTE’s Commission on Composition, and I’ve led other initiatives at the state, regional, district, and school levels. Student writing matters to me. So, too, do academic integrity and the belief that we should help each student develop not just a critical and creative apparatus but also an ethical core.

So, AI? ChatGPT? And ChatGPT-4?

Oh my.

Yet, it’s here.

To a certain extent, it already has been. The rough beast slouched toward Bethlehem long ago. Spell check was invented in 1971, a few years before I was born. 

The tools we’ve been using to write have been evolving ever since, moving us closer to this moment. Word processing in 1976. Microsoft Word 1.0 in 1983. Ten years later, Microsoft Word added the AutoCorrect feature. Google Docs came online in 2006. Of course, I’m only naming a few.

Recently, after reading, thinking, conversing, and experimenting with students and colleagues, I decided to interview ChatGPT for this blog. 

I asked it a lot of questions. For the purpose of this piece, at the end, I’m including just three of those questions and the AI-generated responses. See what you think. How does ChatGPT do on topics of AI & ethics, academic integrity, and student leadership, all in the context of K12 schools? Does it seem the center can hold? 

Always, it’s been part of our job as educators to help our students ask meaningful questions. It’s critical in learning to write, writing to learn, figuring out what we actually think and believe, and learning to lead.

As we know, some schools and districts have banned students from using tools like ChatGPT altogether. That is one approach. Does prohibition work? Does it put a stop to the questioning and experimenting? Or does it, at minimum, send a message?

Others are embracing the use of generative AI tools. Kate Darling of MIT’s Media Lab suggests, “ChatGPT and other AI-based language applications could be, and perhaps should be, integrated into school education. Not indiscriminately, but rather as a very intentional part of the curriculum. If teachers and students use AI tools like ChatGPT in service of specific teaching goals, and also learn about some of their ethical issues and limitations, that would be far better than banning them.”

Darling’s assertion is worth considering. While I agree we absolutely should invite students into conversations about AI, and while I know many students at our school are using ChatGPT and ChatGPT-4, sanctioned or unsanctioned (this bell has been rung…), the case Darling makes for integrating AI-based language applications into the curriculum, and the AI-generated responses themselves, don’t fully account, to my mind, for some of the other realities of this historical moment.

Each day the news gives us more to weigh. More names, including some very consequential ones, added to the letters calling for a six-month moratorium on the development of new systems, based on the belief that AI technologies pose “profound risks to society and humanity.” Chatbot hallucinations, bots that are getting closer to reading our minds, killer robots (and those are just a few items from this morning’s New York Times).

With AI come real questions beyond the academic ones: psychological, social, economic, and existential. As the landscape continues to change, exponentially this time, let’s make space for digging into these complex questions and challenging conversations with each other and with student leaders.

Further reading:

Don’t Ban ChatGPT in Schools. Teach With It.

ChatGPT: Everything you need to know about OpenAI’s GPT-4 tool

How AI is about to change your job – and not for the better

‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead

When A.I. Chatbots Hallucinate

Video: A Brief History of Artificial Intelligence

What are the ethical considerations that educators should have in mind regarding AI, particularly tools like ChatGPT?

“Educators have an important role to play in ensuring that the use of AI tools like ChatGPT aligns with ethical considerations. Here are some key ethical considerations that educators should keep in mind:

  1. Transparency: Educators should be transparent about the use of AI tools and how they are being used to support learning. This includes explaining how the AI tool works, what data it uses, and how it makes decisions.
  2. Bias: Educators should be aware of the potential for AI tools to perpetuate biases and discrimination. They should take steps to ensure that the AI tool is trained on unbiased data and that its decisions are fair and equitable.
  3. Privacy: Educators should respect the privacy of their students and ensure that any data collected by the AI tool is handled in accordance with relevant privacy laws and policies.
  4. Responsibility: Educators should be responsible in their use of AI tools and ensure that they are using them to support learning in a meaningful and ethical way.
  5. Pedagogy: Educators should consider the pedagogical implications of using AI tools like ChatGPT. They should ensure that the tool is being used to support learning in a way that aligns with their educational goals and principles.
  6. Evaluation: Educators should evaluate the effectiveness of AI tools in supporting learning and ensure that they are making a positive impact on student learning outcomes.

Overall, educators should use AI tools like ChatGPT in a way that aligns with ethical principles and supports student learning in a meaningful way. They should be transparent, mindful of bias, respect privacy, be responsible, consider pedagogy, and evaluate the impact of AI on student learning outcomes.”

What are the best ways to address concerns regarding ChatGPT and academic integrity in a K12 setting? For instance, should students be told to 1) ask whether they may use it, and 2) cite the source if they do?

  1. “Explicitly state whether students are allowed to use ChatGPT for specific assignments or tasks. If it is not allowed, communicate this clearly to students and explain the reasons behind the decision.
  2. If ChatGPT is allowed, provide guidance on how it should be used ethically. Students should be instructed to use ChatGPT as a tool for generating ideas and insights, but not as a replacement for critical thinking or independent work. They should be encouraged to verify the information generated by ChatGPT and to use it in a responsible and ethical manner.
  3. Students should be instructed to cite the source of any content generated by ChatGPT, just as they would for any other source of information. This includes both in-text citations and a reference list at the end of the assignment.
  4. Teachers should monitor students’ use of ChatGPT to ensure that it is being used appropriately and ethically. They should be available to answer questions and provide guidance as needed.
  5. Finally, students should be reminded of the importance of academic integrity and the consequences of plagiarism or cheating. They should be encouraged to approach the use of ChatGPT with the same level of academic integrity as any other assignment or project.

By following these best practices, educators can help ensure that ChatGPT is used in a responsible and ethical way, and that academic integrity is maintained in the K-12 classroom.”

To what extent is it important to engage student leaders in K12 settings around the subject of AI? Why?

“It is increasingly important to engage student leaders in K-12 settings around the subject of AI for several reasons:

  1. AI is becoming more pervasive in our daily lives: AI is being used in a growing number of industries, including education, healthcare, finance, and transportation. As future leaders, students need to understand the impact of AI on these industries and how it can be used to solve complex problems.
  2. AI raises important ethical questions: AI presents ethical challenges, such as bias, privacy, and accountability. By engaging student leaders in discussions and activities around AI, they can develop a deeper understanding of these issues and learn to approach them in an ethical manner.
  3. AI is a key driver of innovation: AI has the potential to transform the way we work, learn, and live. By engaging student leaders in learning about AI, they can develop the skills and knowledge necessary to leverage this technology in innovative ways.
  4. AI is a growing field with many career opportunities: By engaging student leaders in AI, we can prepare them for careers in this field, which is projected to grow rapidly in the coming years.

In summary, engaging student leaders in K-12 settings around the subject of AI is important to prepare them for the future, help them develop a deeper understanding of ethical issues related to this technology, and enable them to leverage AI to solve complex problems and drive innovation.”

(ChatGPT, personal communication, April 28-29, 2023).

A three-time graduate of the University of Wisconsin-Madison, Emily Tymus Ihrke is an award-winning teacher. She’s taught in public and independent schools, in upper schools and middle schools, in English classrooms, school libraries, and the learning center. Emily has been a coach, mentor, department chair, and dean, and she is the founding director of University School of Milwaukee’s PK12 Ethical Leadership Program. She loves learning, dancing, and her 11-year-old daughter.