#genAI
> [!info]+ Transparency on genAI use
> I have used ChatGPT (5.2) and Claude (Sonnet 4.5, 4.6) as a **copyeditor** across this site to help me revise and reflect on **what I have written**. However, I am not using AI tools to produce written content for direct use. **Content I share with AI tools originates with me**.
# Integrating generative AI (genAI)
Starting in January 2023, genAI became something I focused much of my effort on learning about and building educational programming around. I have incorporated it into my work, given presentations about it, and designed and delivered multiple workshops on it. I continue to advocate for **using it to strategically improve and transform what we do**. But I have concerns, particularly around *cognitive offloading*.
While I was at The George Washington University (GWU) as the Director of Strategic Digital Learning Initiatives, I was tasked with creating the strategy and leading the central library's response to genAI. I was amazed in November 2022 when [ChatGPT was introduced](https://openai.com/index/chatgpt/). My work led to partnerships across the organization, driving conversations and upskilling. Within my team, we focused on how we could use AI to enhance our capabilities, workflows, and deliverables.
As I look to the future, genAI is something I hold deep interest in and caution about. I have benefited from using AI to improve my writing, explore ideas, solve coding challenges, and support general study. But I've also found myself trusting it too much and forgetting, at times, that it doesn't understand context or human emotional response. Nonetheless, I keep using it because it is the future of work.
## Uses of genAI
I've been asked a few times, "How do you use AI?" Below are some of the ways I have used and am using AI tools. Beyond these, I, like anyone else, have a few stories around use.
1. Refreshing my memory of concepts within my domain.
2. Discovering new developments within my domain.
3. Exploring and interlinking knowledge outside my expertise.
4. Searching for solutions through online databases.
5. Creating podcasts from digital books and research articles.
6. Outlining draft documents and presentations.
7. Generating images for presentations, websites, and produced video.
8. Designing lesson plans, rubrics, and potential learning activities.
9. Identifying parts of circuit boards and other hardware.
10. Generating discussion questions for design meetings and learning activities.
11. Exploring human behaviors and decision-making.
12. Identifying keywords and themes in qualitative survey analysis.
13. Evaluating project management workflows.
14. Generating and iterating on interview questions.
15. Verbally ideating through ambiguous problems.
16. Doing design thinking as part of course and multimedia design.
17. Solving problems within HTML, CSS, and Python.
18. Tutoring me on technical skills.
19. Helping to identify frameworks and models from a JPEG file.
20. Serving as a copyeditor.
21. Producing a summary and list of skills from training/course notes.
22. Taking on roles of people I am digitally communicating with.
> [!question]+ Developing AI agents
> Like many people, I've been studying AI agents and am interested in setting up my own with [OpenClaw](https://openclaw.ai/). Rather than experimenting now, I've decided to wait until I have settled on clear purposes and can purchase a dedicated Mac Mini to run it. Running AI agents on my primary machine is too much of a risk to my core data. I've read too many stories of data getting wiped out by AI.
## Where I stand with AI
**I am an AI optimist**. Admittedly, though, I'm getting a bit scared at times, and my mind drifts to some of my favorite science fiction. I remain cautiously optimistic, however, about how it may accelerate learning and effectiveness.
**I generally see AI as a partner in work and learning activities**. I use AI tools for a start or as an editor. ==Whether an AI tool copyedits my work, or produces a draft, my practice is to always reflect on the response and work through the AI produced content to ensure truthfulness, authenticity, and my/human tone. If I'm working with a bit of code, I study why something it gave me worked.==
> [!important]+ Cognitive offloading
> As I have worked more deeply with AI in writing and learning, I have increasingly encountered situations where I've thought, "wow, that feedback is perfect" (from AI). But at the same moment I found it acceptable, a part of my mind said, "that doesn't really capture the context, nor does it communicate to humans."
>
> I have been struggling with 'cognitive offloading' and trying to understand when the best time is for me to step back, give AI less, and do the hard thinking the old-fashioned way. This problem is of significant interest to me.
>
> If you have the same interest, check out the [MIRA Lab](https://mira-yu.org/) at the University of North Texas. They are ramping up some good work on this problem.
## Practice with AI
There are many general rules and **best practices** for using AI. My instinct, however, is to take these guides with a grain of salt because of how quickly the technology changes. Nonetheless, I have some general rules of thumb and a couple of frameworks I like to reference when creating prompting strategies.
### Rules of thumb
- Reflect on and revise **everything** AI provides.
- Ensure language used is truthful and not exaggerated.
- Take a moment to think about what you're giving up.
- Re-write proposed AI language to fit your tone and self.
- If working with code, review it, check it, and understand what you have.
- Let others know when you use it.
### Frameworks
It can be very helpful to study and practice one or two prompting frameworks to maximize results. Doing so helps you get the most value out of the tools, and it is also a new skill needed for the AI economy. I learned the following frameworks over the past couple of years, and they are the ones I return to when working with AI on various tasks.
#### CARE Framework
There are actually two CARE frameworks: one for [prompting/interaction](https://www.nngroup.com/articles/careful-prompts/) and one for governance. I became aware of the prompting framework while working in an academic context and later learned of the governance one. You can use the prompting framework to build a starting prompt to iterate on.
1. **Context**
- Describe the situation. Tell the AI your role, its role, project background, and target audience.
2. **Action**
- Request specific action. Tell the AI clearly what you want to receive.
3. **Rules**
- Provide constraints. Give the AI quantities, quality expectations, and standards to follow.
4. **Examples**
- Demonstrate what you want. Provide the AI with reference material and/or tell it what is a good and a bad output.
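As a concrete illustration, the four CARE components can be assembled into a single starting prompt. This is a minimal Python sketch; the function name and the example values are my own, not part of the framework itself.

```python
# Minimal sketch: assemble the four CARE components into one prompt.
# The function name and example values are illustrative, not part of
# the CARE framework itself.

def build_care_prompt(context: str, action: str, rules: str, examples: str) -> str:
    """Join the labeled CARE components into a single prompt string."""
    return "\n\n".join([
        f"Context: {context}",
        f"Action: {action}",
        f"Rules: {rules}",
        f"Examples: {examples}",
    ])

prompt = build_care_prompt(
    context="I am an instructional designer preparing a 60-minute staff workshop on prompt writing.",
    action="Draft three learning objectives for the workshop.",
    rules="Use measurable verbs and keep each objective under 20 words.",
    examples="Good: 'Write a prompt that states context, action, and rules.'",
)
print(prompt)
```

From here, the rendered prompt becomes the starting point you iterate on in conversation.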
#### CLEAR Framework
I also became aware of this framework while working on AI literacy in an academic setting. Beyond improving quality, this framework can also be used to maximize efficiency. It is an established framework and a good place to start prompting.
1. **Concise**
- Focus on the words that you want the AI to give attention to. Keep it simple, be direct, and remove human niceties such as "please" and "thank you".
2. **Logical**
- Have clear relationships between words and ensure that the flow of the prompt is smooth. AI is not emotional and will take you literally.
3. **Explicit**
- Provide clarity on what you're seeking. Be direct, explicit, and provide the most relevant details.
4. **Adaptive**
- Assess the output received. Then adjust and rephrase your original prompt based on that output. Change your approach by revising the C-L-E steps.
5. **Reflexive**
- Reflect on where you landed with AI. Critically evaluate for accuracy, tone, relevance/quality, truthfulness, etc. Consider trying another tool, reworking what AI provided, or not using AI all together.
Note: Some research suggests that adding kind words and motivating statements can improve AI performance.
Lo, L. S. (2023). The CLEAR path: A framework for enhancing information literacy through prompt engineering. *The Journal of Academic Librarianship, 49*(4), 102720. [https://doi.org/10.1016/j.acalib.2023.102720](https://doi.org/10.1016/j.acalib.2023.102720)
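To make the "Concise" step concrete, here is a small Python sketch that strips polite filler from a draft prompt before sending it. The filler list and function name are my own illustration, not drawn from the CLEAR paper.

```python
import re

# Illustrative sketch of the CLEAR "Concise" step: remove polite filler
# so the words that remain are the ones the AI should attend to.
# The filler list is my own example, not part of the framework.

FILLERS = [r"\bplease\b", r"\bthank you\b", r"\bif you don't mind\b"]

def make_concise(prompt: str) -> str:
    """Strip filler phrases and collapse the leftover whitespace."""
    for pattern in FILLERS:
        prompt = re.sub(pattern, "", prompt, flags=re.IGNORECASE)
    return re.sub(r"\s{2,}", " ", prompt).strip()

print(make_concise("Please summarize this article in five short bullets"))
# → "summarize this article in five short bullets"
```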
#### TCREI Framework
This is a pattern I learned from the training [[Google Prompting Essentials]]. It is simple and very useful when you have a clear goal for using AI. Build prompts in the order of the acronym.
1. **Task**
- Describe the task that you want the AI to help you with including the expected persona and output format.
2. **Context**
- Provide the necessary details the AI needs. Quantify where possible.
3. **References**
- Provide materials the AI should use when creating output. These can be reference files within projects in tools like Anthropic Claude and OpenAI ChatGPT.
4. **Evaluate**
- Ask whether the output gives you what you need. Consider how you might rephrase part of your prompt, such as asking a question in another way.
5. **Iterate**
- Have a conversation with the AI. Add more context and/or references beyond what you provided in your original prompt.
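The five steps above can be sketched in code. This is a minimal Python illustration; the class and field names are mine, since the framework only prescribes the order of the components plus the evaluate/iterate loop.

```python
from dataclasses import dataclass, field

# Minimal sketch of a TCREI-ordered prompt: Task, then Context, then
# References, with Evaluate/Iterate handled by editing and re-rendering.
# Class and field names are my own illustration.

@dataclass
class TcreiPrompt:
    task: str        # what the AI should do, including persona and output format
    context: str     # necessary background details
    references: list = field(default_factory=list)  # material the AI should draw on

    def render(self) -> str:
        parts = [f"Task: {self.task}", f"Context: {self.context}"]
        if self.references:
            parts.append("References:\n" + "\n".join(f"- {r}" for r in self.references))
        return "\n\n".join(parts)

draft = TcreiPrompt(
    task="Act as a copyeditor and tighten the paragraph below.",
    context="The paragraph comes from a personal blog post about AI use.",
    references=["The site's style notes", "The original paragraph"],
)
print(draft.render())
# Evaluate the output, then iterate: adjust task, context, or references
# and render again.
```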