#genAI

> [!info]+ Transparency on genAI use
> I have used ChatGPT and Claude as a **copyeditor** across this site to help me revise and reflect on **what I have written**. I am not, however, using AI tools to produce written content for direct use. **Content I share with AI tools originates with me**.

# Integrating Artificial Intelligence (AI)

Starting in January 2023, I focused much of my effort on learning about AI in order to create educational programming focused on **AI Literacy**. I have incorporated it into my work, given presentations about it, and designed and delivered multiple workshops on it. I continue to advocate for **using it to strategically improve and transform what we do**. But I have concerns, particularly around *cognitive offloading*.

While I was at The George Washington University (GWU) as the Director of Strategic Digital Learning Initiatives, I was tasked with creating the strategy and leading the central library's response to genAI. I was amazed in November 2022 when [ChatGPT was introduced](https://openai.com/index/chatgpt/). My work led to partnerships across the organization, driving conversations and an embrace of changes to work, teaching, research, and creativity. Then, within my team, I asked, "*How can we use AI to enhance our capabilities, workflows, and deliverables?*"

Today, I approach AI with both deep interest and caution. I have benefited from using AI tools to refine my writing, explore thoughts, solve coding challenges, and begin new areas of study. But I've also found myself trusting it too much and forgetting at times that it doesn't understand context or human emotional response. Nonetheless, it is our future partner.

## AI Literacy

I highly recommend that anyone follow the work of Dr. Leo Lo, the Dean of Libraries at the University of Virginia. He is, without question, one of the best academics I've seen at interpreting the foundations of AI: both what it means and what we need to learn. In fact, the CLEAR framework below is his creation.
According to Lo (2025), "*AI literacy is the ability to understand, use, and think critically about AI technologies and their impact on society, ethics, and everyday life. This broad definition encompasses several interconnected components, each essential for developing a well-rounded understanding of AI.*" This leads to five strands of AI literacy:

1. **Technical knowledge**: Understanding the basic concepts and functions of AI technologies and how they operate.
2. **Ethical awareness**: Recognizing the ethical challenges and dilemmas that arise from the use of AI.
3. **Critical thinking**: Analyzing AI's role, benefits, and potential risks in various contexts.
4. **Practical solutions**: Applying AI tools and techniques effectively in real-world situations.
5. **Societal impact**: Understanding how AI influences and shapes societal structures, norms, and future developments.

Whereas in the past the challenge of study and research was finding information and the answer to a problem, the new challenge is **verification** of the answer received. This is also where the critical internal challenges of **trust** and **cognitive offloading** appear. Within this new area of study, big themes arise, such as:

- How should AI and humans partner to create, build, and explore without replacing human abilities?
- How can AI literacy be built into lifelong learning so that people continue to become better at AI collaboration over time?
- How can institutions, schools, and governments provide equitable AI access to people in their communities?

## Uses of genAI

Regarding how I use AI, below are some of the things I do and have done with AI tools.

1. Refresh my memory, discover research, and discover connections between concepts within my domain.
2. Explore and interlink knowledge outside my expertise.
3. Search for solutions to problems across online databases.
4. Create podcasts from digital books and research articles.
5. Create draft outlines for documents and presentations.
6.
Generate and edit images for presentations, websites, and produced video.
7. Design lesson plans and rubrics, and brainstorm potential learning activities.
8. Generate discussion questions for design meetings and learning activities.
9. Generate interview questions from job descriptions and listed competencies.
10. Explore human behaviors and psychology to identify possible pathways.
11. Analyze survey data and explore potential themes.
12. Evaluate project management workflows.
13. Verbally ideate through ambiguous problems and creative challenges within a team.
14. Do design thinking as part of course and multimedia design.
15. Solve problems within HTML, CSS, and Python. (Saving days of work!)
16. Serve as a learning partner for technical skill development.
17. Help identify unknown frameworks and models from image files.
18. Serve as a copyeditor for original writing.
19. Produce a summary and list of skills from training/course notes.

> [!question]+ Developing AI agents
> Like many people, I've been studying and have been interested in setting up my own AI agent(s) with [OpenClaw](https://openclaw.ai/). At this point, I'm building resources that I want it to use, learning about writing 'skills', and, importantly, holding off until I can purchase a dedicated Mac mini to run it. You can't be too cautious with autonomous AI agents.

## Where I stand

**I see AI as a partner in work and learning**. I use AI tools to start an idea, initialize some exploration, and edit writing. ==In practice, whether an AI tool copyedits my work or produces a draft, I always reflect on the response and work through the AI-produced content to ensure truthfulness, authenticity, and my own human tone. If I'm working with a bit of code, I study why what it gave me worked.==

> [!important]+ Cognitive offloading
> As I have worked more deeply with AI in writing and learning, I have increasingly encountered situations where I've thought, "Wow, that feedback is perfect" (from AI). But at the same moment I thought it was acceptable, a part of my mind said, "That doesn't really capture the context, nor does it communicate to humans."
>
> I have been struggling with 'cognitive offloading' and trying to understand when the best time is for me to step back, give AI less, and do the hard thinking the old-fashioned way. This problem is of significant interest to me.
>
> If you share this interest, check out the [MIRA Lab](https://mira-yu.org/) at the University of North Texas. They are ramping up some good work on this problem.

## Practice with AI

There are many general rules and **best practices** for using AI. My instinct, however, is to take these different guides with a grain of salt because of how quickly the technology changes. Nonetheless, I have some general rules of thumb and a couple of frameworks I like to reference when creating prompting strategies.

### Rules of thumb

- Reflect on and revise **everything** AI provides.
- Ensure the language used is truthful and not exaggerated.
- Take a moment to think about what you're giving up.
- Rewrite proposed AI language to fit your tone and self.
- If working with code, review it, check it, and understand what you have.
- Let others know when you use it.

### Prompting Frameworks

It can be very helpful to study and practice one or two prompting frameworks to maximize results. This can help you get the most value out of the tools, and it is also a new skill needed for the AI economy. I learned the following frameworks over the past couple of years, and they are the ones I go back to when working with AI to accomplish various tasks.
#### CARE

There are actually two CARE frameworks: one for [prompting/interaction](https://www.nngroup.com/articles/careful-prompts/) and one for governance. I became aware of the prompting framework while working in an academic context and later became aware of the governance one. You can use the prompting one to build a starting prompt to iterate on.

1. **Context** - Describe the situation. Tell the AI your role, its role, the project background, and the target audience.
2. **Action** - Request a specific action. Tell the AI clearly what you want to receive.
3. **Rules** - Provide constraints. Give the AI quantities, quality expectations, and standards to follow.
4. **Examples** - Demonstrate what you want. Provide the AI with reference material and/or tell it what a good and a bad output look like.

#### CLEAR

I became aware of this framework while working on AI literacy in an academic setting. Beyond output quality alone, this framework can be used to maximize efficiency. It is an established framework and a good place to start prompting. (Note that despite the guideline to be 'Concise', some research has shown that adding kind words and motivating statements can cause AI to perform better.)

1. **Concise** - Focus on the words that you want the AI to give attention to. Keep it simple, be direct, and remove human niceties such as "please" and "thank you".
2. **Logical** - Have clear relationships between words and ensure that the flow of the prompt is smooth. AI is not emotional and will take you literally.
3. **Explicit** - Provide clarity on what you're seeking. Be direct, explicit, and provide the most relevant details.
4. **Adaptive** - Assess the output received. Then adjust and rephrase your original prompt based on the output. Change your approach by revising the C-L-E steps.
5. **Reflexive** - Reflect on where you landed with AI. Critically evaluate for accuracy, tone, relevance/quality, truthfulness, etc. Consider trying another tool, reworking what AI provided, or not using AI altogether.
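Frameworks like these lend themselves to simple templating. A minimal Python sketch of turning the CARE components into a reusable starting prompt; the function name and all field text are my own illustration, not part of the framework itself:

```python
def build_care_prompt(context: str, action: str, rules: str, examples: str) -> str:
    """Assemble a starting prompt from the four CARE components."""
    sections = [
        ("Context", context),
        ("Action", action),
        ("Rules", rules),
        ("Examples", examples),
    ]
    # Render each component as its own labeled section, in CARE order.
    return "\n\n".join(f"## {label}\n{text}" for label, text in sections)

# Hypothetical field values for a copyediting request.
prompt = build_care_prompt(
    context="You are an experienced copyeditor. I am revising a blog post for a general audience.",
    action="Suggest edits that tighten the prose without changing my meaning.",
    rules="Limit suggestions to 10. Preserve my tone. Do not add new claims.",
    examples="Good: pointing out a filler phrase to cut. Bad: rewriting whole paragraphs.",
)
print(prompt)
```

From there, the iteration steps of either framework apply: paste the assembled prompt into a tool, assess the output, and revise the fields rather than starting over.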
#### TCREI

This is a pattern that I learned from the training [[Google Prompting Essentials]]. It is simple and very useful when applied to cases where you have a clear goal for using AI. You should build prompts in the order of the acronym.

1. **Task** - Describe the task that you want the AI to help you with, including the expected persona and output format.
2. **Context** - Provide the necessary details the AI needs. Quantify where possible.
3. **References** - Provide things that the AI should use when creating output. These can be reference files within projects in tools like Anthropic Claude and OpenAI ChatGPT.
4. **Evaluate** - Ask whether the output gives you what you need. Consider how you might rephrase part of your prompt, such as asking a question in another way.
5. **Iterate** - Have a conversation with the AI. Add more context and/or references beyond what you provided in your original prompt.

## References

Lo, L. S. (2025). *AI literacy: A guide for academic libraries*. https://crln.acrl.org/index.php/crlnews/article/view/26704/34626

Lo, L. S. (2023). The CLEAR path: A framework for enhancing information literacy through prompt engineering. *The Journal of Academic Librarianship, 49*(4), 102720. https://doi.org/10.1016/j.acalib.2023.102720