Navigating Chaos: How Mission-Driven Organizations Can Thoughtfully Utilize AI
Staying agile, resilient, and values-focused amidst turbulence
TL;DR: As the social sector faces unprecedented challenges, nonprofits will be under even more pressure to do more with less. This post explores how GenAI, when approached thoughtfully and cautiously, can serve as a strategic tool for mission-driven organizations.
This is…a LOT to process
Mission-driven organizations are currently navigating extraordinary uncertainty. As the past few weeks show, nonprofit organizations will face constantly shifting priorities and will likely have to continue doing more with less simply to survive. We believe there are ways mission-driven organizations can make strategic use of generative AI tools right now to help navigate the onslaught of challenges.
As Jewyla Lynn noted in her recent post, AI can serve as an 'augmentation tool in systems change' (more than just a productivity assistant; thank you!). She discusses how we can leverage AI to do what we as humans simply can't do, what we can't do because of resource constraints, or what we can't prioritize. The onslaught of harmful executive orders and chaos is designed to do exactly this to our brains and bodies: it asks the impossible, under new and unreasonable constraints, and it upends our priorities and how we live out our values through our actions.
We are currently grappling with our own fears related to AI (seriously, they are growing every day): everything from privacy, ethics, and human rights to labor conditions and sustainability. We're mindful, then, that writing about how AI might help mission-driven organizations in this moment is not a benign inquiry. At the same time, we're often quite pragmatic. It's a tool. Yes, big tech is bad in so many ways. AND, today, right now, we're all humans trying to do the best we can with what we've got, so how can we use this tool to help in this moment? And how can we use our experimentation with these tools to deepen our understanding of how they work, where the problems are, and what we need to advocate for? The more we use these tools, the more we learn about the tools themselves. So, how can we use this learning for good? [For a deeper dive into what's possible if we reimagine AI in ecosystem-driven, feminist, decolonial ways, see this amazing report.]
Set Yourself Up for Success
So, let's get started. But before you do, take a few minutes to set yourself up for success. Your queries to your preferred GenAI tool become much more powerful when you give the model more context. If you set up a space with key documents for the AI to draw on, you'll get much more out of your prompts. Two tools that are useful for this are Google's NotebookLM and Anthropic's Claude Projects. Both allow you to provide a corpus of documents of various types as the basis for your prompts. You'll still be leveraging the full power of the respective models, but they will focus primarily on the information you've placed in that context.
If you don't want to set up a project or other dedicated space, just keep your files handy and include them as attachments when you prompt in ChatGPT or your preferred tool. This is harder to return to or reuse across different questions and chats, but it will still work for focused, specific tasks within a chat thread.
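If someone on your team is comfortable with a little scripting, the same idea of grounding a prompt in your own documents also works programmatically. Below is a minimal sketch using Anthropic's Python SDK; the file name, prompt, and model name are placeholders to adapt, not a prescription.

```python
# Minimal sketch: ground a prompt in one of your own (de-identified) documents.
# Assumes the Anthropic Python SDK is installed (pip install anthropic) and the
# ANTHROPIC_API_KEY environment variable is set. "impact_report_2023.txt" is a
# hypothetical file name; swap in your own document.
from anthropic import Anthropic

client = Anthropic()

with open("impact_report_2023.txt", encoding="utf-8") as f:
    report = f.read()

prompt = (
    "Using the impact report below as context, help me draft a short update to "
    "our community partners about upcoming program changes.\n\n"
    f"--- IMPACT REPORT ---\n{report}"
)

message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # example model alias; use whatever is current
    max_tokens=1024,
    messages=[{"role": "user", "content": prompt}],
)

print(message.content[0].text)
```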
Gather up some core documents for your organization. Here are some ideas of what could be useful:
Mission, vision, and values statements
Impact reports from the last few years
Stakeholder communications, such as donor engagement letters
Grant applications
Theories of change or logic models
Only use what you're comfortable with, and be sure you understand the privacy limitations and guarantees of your chosen tool (this is one reason Valerie is a paying subscriber to Claude and runs NotebookLM on her enterprise workspace). If you're unsure, stick to what's public-facing, or what you wouldn't mind being more widely available. You can also do a quick 'find and replace' to de-identify documents, removing your organization's name or other identifying details.
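If you have more than a handful of files, a tiny script can do that find-and-replace for you before anything gets uploaded. Here's a minimal sketch; the folder names and the replacement map are hypothetical and would need to be adapted to your own documents.

```python
# Minimal sketch: de-identify plain-text copies of documents before sharing them
# with an AI tool. Folder names and the replacement map are placeholders.
from pathlib import Path

REPLACEMENTS = {
    "Sunrise Community Fund": "[ORGANIZATION]",  # your organization's name
    "Jane Doe": "[STAFF NAME]",                  # staff, board, or client names
    "info@sunrisefund.org": "[EMAIL]",           # contact details
}

source_dir = Path("originals")       # .txt exports of the documents you gathered
output_dir = Path("deidentified")
output_dir.mkdir(exist_ok=True)

for path in source_dir.glob("*.txt"):
    text = path.read_text(encoding="utf-8")
    for sensitive, placeholder in REPLACEMENTS.items():
        text = text.replace(sensitive, placeholder)
    (output_dir / path.name).write_text(text, encoding="utf-8")
    print(f"De-identified {path.name}")
```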
Now, you're ready to experiment. Remember, experimentation with GenAI means starting simple, adopting a curious mindset, and playing with the tools by asking things in different ways.
Four Strategic Applications of AI for Mission-Driven Organizations
1. Values-Aligned Communication
Right now, communication is key (isn't it always?). Being able to communicate effectively to your staff, partners, donors, and constituents will go a long way toward building clarity amidst chaos and easing tensions. Connecting your communication efforts to your core values will also help foster alignment (for both you and those you’re communicating to).
Try this prompt: "Help me draft a communication to our community partners about program changes that emphasizes our continued commitment to our mission while being transparent about our challenges." (include some of the changes that are being made)
Pro tip: If you're having trouble figuring out what to say, ask it to first "act as an interviewer who represents [the board, the community, our constituents, etc.] and ask me a series of questions to help us generate this communication." You can type your responses or use voice-to-text to speak your responses back. If you’re a ChatGPT user, you can try voice mode for this.
2. Adapting Grant Materials
While the amount of effort funders require for grant proposals is often frustrating (infuriating?), AI offers a way to put all of that proposal-writing effort to work for you and your organization in this moment. If you've written a proposal or ten, you've already articulated your organization's history, theory of change (whether implicit or explicit), effectiveness, goals, mission, and more. So, as you pivot to other funding streams and engage audiences you might not have appealed to before, you can use AI to help transform what you've got into something that resonates with the new audience.
Try this prompt: "Using our previous grant proposal for [program name] as context, help me identify opportunities to adapt our approach for [new funder's] priorities while maintaining our core mission and impact goals. Then, suggest specific sections we could expand or modify to align with their funding guidelines." If you have a new RFP, add it. If not, consider adding information about the funder’s mission/values/funding areas to the input.
If you need to specifically audit materials, proposals, surveys, or other documents for language around diversity, equity, and inclusion that the current administration is weaponizing, there are already tools for that. You can certainly customize a prompt within your own project. The folks at Cause Writer have already created a DEI language identifier that you can use on any copy you paste in.
3. Proactive Organizational Memory
Organizations of every size, nonprofit or not, often struggle with succession planning, onboarding, and the continuity of institutional knowledge. The challenges of the current moment only exacerbate how difficult this work is AND how important it is, especially for small nonprofits where deep organizational knowledge is often held within a few roles.
One thing we know is that systematic documentation and regular reflection practices can be incredibly helpful in building that organizational knowledge corpus. AI is well positioned to help you turn stream-of-consciousness ramblings or 1:1 meetings into more fleshed-out organizational memory documents that maintain continuity, even as your staff shifts amidst all of this chaos. And if you already know you'll be losing staff, and your goal is to persist in your mission and weather this storm, AI can help you capture institutional knowledge from departing staff.
NotebookLM, in particular, can serve as an excellent repository for information as you collect it. If you’re a Google Workspace user, you have even more room to add documents (up to at least 300 at the time of this writing). If you’re a Claude user, you might create a project specific to this where you can place documents and responses to the prompt below.
Try this prompt: "Help me create a structured template for capturing essential program knowledge that includes key stakeholder relationships, critical decision points, recurring challenges and solutions, and institutional context. Then, review this [meeting transcript/recorded conversation] and organize the insights using this template." This prompt assumes you have recorded a meeting where you’ve had this conversation with a peer. A lot of times, it can be easier to talk this through. Use the AI to help you have some consistent documentation and stronger notes so that you can focus on the conversation.
Whether or not your nonprofit is facing staffing changes, proactively going through this exercise across roles and leadership levels is simply a good learning practice. These are the kinds of things we don't regularly review, but that would be helpful to reflect on from time to time so that we can identify gaps, inefficiencies, and opportunities. Having this information will also help new hires get up to speed in their roles much more quickly. So it can save time and preserve so much of the work and wisdom you've already built up.
4. Impact Storytelling
Generative AI is already an incredibly useful tool for storytelling. Yes, you absolutely need to double-check its interpretation of any data you might offer it (especially if it's interpreting Likert scales, FYI!), but if you're looking for creative ways to frame your stories, powerful metaphors of impact, or ways of adapting your stories for different audiences, it's a tool that can be extremely useful for that work. Right now, it can be really challenging to get those creative energies flowing (because, well, [gestures broadly]), so use AI to help you avoid the blank page and spark some ideas.
Try this prompt: "Based on these program survey results and impact data, help me craft a compelling narrative that highlights our key outcomes for [specific audience]. Use concrete examples and metaphors that will resonate with [audience type] while maintaining accuracy. Consider different formats for sharing this story across various platforms."
Pro tip: We love to ask it to give us five ideas for metaphors. Or ten! Or fifteen! Seriously, AI doesn't mind or get too annoyed, so go for it. Worst case, you'll get some repetition. You can also tell it not to repeat itself, to "think outside the box," or to challenge you with metaphors you might not have considered. There are many ways to use it to stretch your thinking, not just affirm it, which is often a useful exercise. Usually, thinking through the options it presents and determining what we do or don't like about them helps us sharpen our thinking. Win-win.
Ethical Guardrails for AI Implementation
Depending on the ways you’re using AI, the types of data you are inputting into the platforms, and the outputs you’re producing, you’ll want to consider the following:
1. Data sensitivity: What types of data are appropriate to share with AI tools, and what should remain private? Ideally, your organization has created guidelines about what information (particularly about vulnerable populations) should never be uploaded. If not, this is a great time to pause, reflect, understand the risks, err on the side of not sharing sensitive data or of de-identifying it, and use this reflection to inform organizational policy.
2. Consent and transparency: Have your stakeholders been informed about how you're using AI? Consider developing a simple statement explaining your organization's approach to AI tools. It’s ok if you’re just experimenting right now. A culture of experimentation depends on sharing, learning, and transparency. You can start with just yourself — what’s your personal statement of AI use? How would you talk to a colleague or stakeholder about what you’re doing and learning?
3. Verification process: What review process will you implement for AI-generated content before it's shared externally? Remember that AI can produce convincing but incorrect information. Getting things 80% of the way there is wonderful, but how will you ensure there is both a "human in the loop" and a process for editing and verifying the rest?
Taking time to address these questions will help ensure your use of AI aligns with your mission and values. These aren’t all the questions you should consider, but it’s a good place to start.
Starting Small: Your First Step
No pithy phrase about this being a marathon and not a sprint for the social sector feels ok to say right now. We all know the challenges ahead are vast, and it's really just one step, one breath, at a time. So, choose one area where your organization is feeling the most pressure right now. Start there. Start with a single experiment using one of these prompts. Keep notes on what works, what doesn't, and what surprises you about using AI tools, and consider sharing your experiences with others who are experimenting with these tools.
What's your next step? Share in the comments how you're thinking about using AI to support your mission-driven work during this cacophony of atrocities being thrown at our sector.
A huge thank you to my guest co-author, Michelle, whose comments on a much earlier version of this post and subsequent collaboration helped evolve it to the version above.
Guest Co-Author: Michelle Schneider, PhD
Michelle Schneider serves as the Director of the Center for Creative Leadership's Societal Impact Insights group. With over 20 years of experience in nonprofit leadership and management, she brings both deep personal experience and practical expertise to her work. Michelle recently completed her PhD in Leadership Studies with a focus on Nonprofit and Philanthropic Leadership at the University of San Diego. As a certified leadership coach, she maintains a coaching practice focused on supporting nonprofit women leaders. She is active in the Association for Research on Nonprofit Organizations and Voluntary Action (ARNOVA), the Association of Certified Nonprofit Professionals (CNP), and the Fieldstone Leadership Network San Diego (FLNSD).
Valerie and Michelle, after 30+ years in the classroom, I've seen a lot of tech trends come and go, but your article on AI for mission-driven organizations really caught my attention!
What struck me was how your approach mirrors what we need in education right now. The pressure to "do more with less" is the story of my teaching career, especially these past few years.
Your example about using AI to document institutional knowledge hit home. I've watched decades of teacher wisdom walk out the door with retirements, taking irreplaceable strategies and community connections with them. The idea of capturing this expertise in a structured way makes so much sense.
I'm planning to try your prompt for "values-aligned communication" for my next parent newsletter - explaining our new literacy program while being honest about implementation challenges. After years of spending weekends drafting and redrafting these messages, I could use the help!
As someone who's skeptical of tech "silver bullets," I appreciated your honest acknowledgment of AI concerns while still offering practical starting points. I'm sharing this with my department tomorrow - we need this balanced perspective in education too.
Thank you for a resource that respects both innovation and caution!