> [!summary]+ Summary
> This page explains a workshop designed to engage participants in discussion around the ethical challenges associated with genAI in teaching and learning. Using scenarios based on concerns I was hearing at the time – student resistance, energy use, GPT tutors – participants were asked to air their thoughts. (See the bottom of this page for the results of this workshop.)

# Ethical Issues and Use with genAI

**Delivery details:**

- <u>Date</u>: October 10, 2024
- <u>Target audience</u>: University faculty, staff, and students
- <u>Delivery format</u>: Remote/Zoom
- <u>Duration</u>: 60 minutes

## About the workshop

This workshop was inspired by the multiple and ever-compounding (reasonable) concerns from faculty, staff, and students about the use of generative artificial intelligence (genAI). Some of the most common were the potential loss of critical thinking, the potential for misinformation from LLMs, the risks to data privacy, inherent bias in AI systems, the lack of transparency in how LLMs work, the replication of creative human works, and the loss of work/jobs. Also of great importance, assuming that genAI use would be allowed in classrooms, was equitable access to the technology: some people can and will pay for *better* models, and others cannot or will not.

Regardless, there was broad acknowledgement that genAI will be in the classroom and students will be using it. *How are they taught to use it ethically?* This was and remains the driving question for instructors.

This workshop asked participants to engage in some challenging discussion on the use of genAI through variations of scenarios, each influenced by conversations I was having at the time around genAI. There were no right or wrong answers, just potential approaches. To my surprise, this led to a lot of caution in faculty responses, which was odd: in some forums faculty are very vocal, and in others they are very quiet.

### Description

> In this workshop, we will examine the ethical considerations and responsibilities associated with using generative AI in educational settings. We will discuss key challenges including accuracy and bias, catching and coaching, equity of access, appropriate use, and privacy and data protection. This session will be a space to air some of the challenges you/instructors are facing in the classroom, but it also aims to provide solutions that enable you to guide students on appropriate use, along with strategies for building this into classroom discussions and student learning activities. This workshop is introductory; some experience working with genAI tools (e.g., ChatGPT) is useful but not required.

### Learning objectives

1. Identify strategies for catching inappropriate student use of genAI tools and ways to coach on the correct behaviors.
2. Relate issues across the identified challenges and assess strategies to address those issues.
3. Formulate a strategy for incorporating discussions on ethics and appropriate AI use into classroom activities and student assignments.

### Workshop goal

**The goal was for participants to engage in discussion around the (potential) use of genAI tools in educational contexts – classroom, discussion forums, group work.** It asked participants to reflect on and give meaningful thought to the challenges associated with genAI and education, and to engage in discussion around a series of scenarios.
## Slide deck

<div class="container"><iframe class="responsive-iframe-sd" src="https://1drv.ms/b/c/13829E5D2EB238DE/IQQ91KD5WbvGQYhuRWfq7CKtAU34-urAkrgN2iYm2L_zEoo" width="100%" height="400" frameborder="0" scrolling="no"></iframe></div>

*Note: This is a custom slide deck that I made using Microsoft PowerPoint. Generative Artificial Intelligence (genAI) was not used. All stock images were provided by [Adobe Stock](https://stock.adobe.com) and [Getty Images](https://www.gettyimages.com).*

## 🎯 Results

This workshop opened up conversation on different perspectives of ethical use of genAI. The three concerns addressed were ones commonly heard from faculty and staff. Participants reported a better understanding of why they needed to bolster their AI literacy, as well as of the need to be able to speak with their students about thorny ethical issues.

## Resources

The following were used to research the content of this workshop:

- Chowning, J. T., & Fraser, P. (2007). *[An ethics primer](https://nwabr.org/sites/default/files/NWABR_EthicsPrimer7.13.pdf).* Seattle: Northwest Association of Biomedical Research.
- Han, W., & Schulz, H.-J. (2020). *[Beyond Trust Building — Calibrating Trust in Visual Analytics](https://www.researchgate.net/figure/The-trust-continuum-extended-on-the-model-of-Cho-et-al-8_fig1_348052963)* (pp. 9-15). [https://doi.org/10.1109/TREX51495.2020.00006](https://doi.org/10.1109/TREX51495.2020.00006)
- Long, D., & Magerko, B. (2020). *What Is AI Literacy? Competencies and Design Considerations.* In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (pp. 1-16). Association for Computing Machinery. [https://doi.org/10.1145/3313831.3376727](https://doi.org/10.1145/3313831.3376727)
- Pike, D., McGowin, B., Bond, A., Cox II, L., & Williams, D. (2024, June 6). *[Framing Generative AI in Education with the GenAI Intent and Orientation Model](https://er.educause.edu/articles/2024/6/framing-generative-ai-in-education-with-the-genai-intent-and-orientation-model).* EDUCAUSE Review.
- Ruediger, D., Blankstein, M., & Love, S. (2024, June 20). *Generative AI and Postsecondary Instructional Practices: Findings from a National Survey of Instructors.* [https://doi.org/10.18665/sr.320892](https://doi.org/10.18665/sr.320892)
- U.S. Department of Education, Office of Educational Technology. (2024). *Designing for Education with Artificial Intelligence: An Essential Guide for Developers.* Washington, D.C. [ARCHIVED](https://1drv.ms/b/c/13829E5D2EB238DE/EfGWvkql5qFNlYagqZMQYlYBlg_eHq7-0iM7h3IC2j4egw?e=KQ0qIM)
- Vial, G., Crowe, J., & Mesana, P. (2024, June 11). *[Managing Data Privacy Risk in Advanced Analytics](https://sloanreview.mit.edu/article/managing-data-privacy-risk-in-advanced-analytics/).* MIT Sloan Management Review.