Learning in the Tension Where AI Meets Practice
Reflections from GEO: What’s emerging at the intersection of AI, equity, and philanthropy
Summary: Fresh from the GEO Learning Conference, I’m reflecting on the seeds and signals that surfaced around philanthropy’s relationship to AI, both explicit and emerging. This post explores how oral reporting, narrative strategy, and power dynamics shape the discourse, and how staying open, unsettled, and grounded might help us navigate the complexity ahead. It’s a glimpse into what felt resonant, what raised questions, and where I see new possibilities beginning to take root.
I’m just back from my first experience with the Grantmakers for Effective Organizations (GEO) Learning Conference. This conference alternates with GEO’s national conference and focuses specifically on applying learning within philanthropy and nonprofits.
I wanted to learn as much as possible about how philanthropy and nonprofits engage with AI (or not). But I also wanted to learn about the current discourses and issues in philanthropy that felt like seeds or signals for how the AI conversations might take shape over the next few years.
This post won’t be a run-down of the whole event, which was so incredibly rich and inspiring, and filled with both truth-telling and calls toward excellence (versus perfection). Nor will I be able to recap all the wonderful conversations I had or all the people I met who left my cup feeling full.
What I’m focused on here are the seeds and signals — things I heard that were explicitly about AI, or that I see as connected to how the AI conversation might unfold. It is just one slice of the experience.
Fixing Broken Processes: Oral/Alternative Reporting and AI
There were at least two discussions centered on oral or alternative grantee reporting (OAR). This movement within philanthropy aims to reduce the reporting burden for grantees while also leveraging the learning opportunities reporting could offer, if done more equitably. The discussions were grounded in the recognition that grantee reporting practices often require a lot of time from organization staff (time they could be using to address the social issues at the heart of their missions) and that the learning such reports could generate is rarely shared in a way that benefits everyone involved.
Folks from the Houston Endowment presented on a pilot study that leveraged AI as part of an OAR project. The organization supported the pilot with clear and concise guidelines for AI use (I posted about those here). The presenters documented their process for using AI to support the oral reporting conversations, specifically through transcript cleaning and synthesis.
It was a super clear and effective way of showing how AI could improve grantee reporting efficiency and, most importantly, free up time for the grantee organization to focus on its important work.
My mind went in many different directions about the possibilities this could open up. For example, they mentioned that their program officer took notes on key takeaways from the meetings, and those notes were used to help verify the AI’s output. But I wondered what it would look like to put the human in the loop before the calls took place, for instance by setting the agenda. I also wondered what it could look like to build up context for each grantee over time, through periodic check-ins or other types of reports, and use that context to make each conversation unique to the grantee and their contributions toward the bigger goals. This could be accomplished through a ‘project’ for each grantee, where information accumulates to provide context and documentation for the discussions.
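To make that ‘project’ idea a little more concrete, here is a minimal sketch of what accumulating grantee context and feeding it into a synthesis step might look like. Everything here is hypothetical: the GranteeProject structure, the synthesize_report function, and the stand-in model call are my own illustrative assumptions, not anything from the Houston Endowment pilot.

```python
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class GranteeProject:
    """Hypothetical per-grantee 'project' that accumulates context over time."""
    name: str
    goals: str
    checkins: List[str] = field(default_factory=list)

    def add_checkin(self, note: str) -> None:
        # Periodic check-ins, prior reports, etc. build up grantee context.
        self.checkins.append(note)


def synthesize_report(
    project: GranteeProject,
    cleaned_transcript: str,
    llm: Callable[[str], str],
) -> str:
    """Combine accumulated context with a cleaned call transcript and ask a
    model for a draft synthesis. A program officer would still verify the
    draft against their own notes before anything is shared."""
    context = "\n".join(f"- {note}" for note in project.checkins)
    prompt = (
        f"Grantee: {project.name}\n"
        f"Goals: {project.goals}\n"
        f"Prior check-ins:\n{context}\n\n"
        f"Transcript of oral report:\n{cleaned_transcript}\n\n"
        "Summarize key takeaways and progress toward the goals above."
    )
    return llm(prompt)


if __name__ == "__main__":
    project = GranteeProject("Example Org", "Expand youth literacy programs")
    project.add_checkin("Q1: hired two new literacy coaches")

    # Stand-in for a real LLM call; swap in your model of choice.
    draft = synthesize_report(
        project,
        "(cleaned transcript text would go here)",
        lambda prompt: f"[draft synthesis based on {len(prompt)} chars of context]",
    )
    print(draft)  # reviewed by a human before it goes anywhere
```

The design choice worth noting: the model sees the grantee’s accumulated history rather than a single call, so each synthesis can be unique to that grantee, while verification stays with a human at the end.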
Zooming way out, my brain also went to the risks of such efforts: if not done safely and securely, introducing AI could expose grantee partners to harm. (This is all the more reason for organizations to have clear guidelines beyond bans, and for tools like Change Agent AI to continue to gain traction.) With so much chaos and unpredictability in our sector, not all organizations can safely and securely document their activities, and by collecting such information, funders may put grantees at risk.
I also wondered about the unique promise of generative AI for this specific use case of oral reporting. Given that LLMs are trained on language, and oral reports are discussion-based alternatives to traditional narrative reporting, could LLMs be uniquely positioned to enhance the dialogue and meaning-making that arise from these conversations?

In another session on oral reporting (but not focusing on AI), presenters from the Robert Wood Johnson Foundation shared their approach to piloting a written reporting alternatives project (WRAP). As part of their process, they shared a ‘power mapping’ activity to understand all of the stakeholders around the reporting process and how they would influence the success of WRAP or stand to benefit from it. I found this a really interesting approach, and it made me wonder how power mapping could also serve organizational AI efforts by clarifying the lines of power and decision-making around how AI is used within organizations or with grantees.
Counter Narratives that Cultivate and Connect
I was so excited on Day 2 to attend back-to-back talks from FrameWorks Institute and the AI Now Institute. As a former narrative researcher feeling adrift amidst AI's polarizing narratives, I was searching for some anchors. Dr. Julie Sweetland (from FrameWorks Institute) and Amba Kak (from AI Now Institute) did not disappoint.
The narrative pump was primed when Dr. Sweetland talked about how communication is one thing we can control in times of uncertainty. And when we’re experiencing such extreme polarization in our society, voices of reason become that much more important. There are, however, communication strategies that can help those voices offer narratives that actually take hold. While this presentation didn’t focus on AI, so much of it resonated with what I find challenging about being on platforms like LinkedIn and Substack, where it seems like everyone is either a pure AI Optimist/Evangelist or an AI Skeptic/Resistor.
Which is why 15 minutes later, I was blown away by Amba Kak from AI Now Institute and her excellent talk on the future of AI, which focused on, you guessed it, framing the discourse. She began her talk by noting the two camps I mentioned earlier. However, she pointed out a flaw in both discourses: they largely position society on the receiving end of these technologies. The tech is imbued with agency, and we are either doomed to be destroyed by it or saved by it, but our role in shaping it is minimal, if explored at all.
She also focused on the concentration of power within AI, which is nothing new given the wealth and power Big Tech has been accumulating for years. Treating AI as something wholly new, however, glosses over that accumulation, which is especially concerning when AI tools are likely to further concentrate wealth and power as they disrupt labor markets and economies. This is all the more reason we need policy, regulation, and participation in shaping this technology.
She called out narrative devices that position AI as a panacea: treating these algorithms and technologies as new (the math is old; what’s new is the scale of data and compute), the ‘bigger is better’ paradigm of growth and scale, AI ‘solutionism’ as a means of solving all of our problems, and fuzzy language around AI’s actual abilities at this time.
Most importantly, she pointed out that when this tech fails (i.e., ‘hallucinates’ or misidentifies), it does so unevenly. And those who are often most affected by such failures are those already systemically marginalized in our society.
I genuinely hope she and Dr. Sweetland were able to connect for coffee because what she proposed next could be a fantastic synergy between the two organizations. Her question to us was: Is another trajectory possible? What would it take to build a positive agenda for public-centered AI innovation?
She identified several opportunities for doing this, primarily through a policy lens. One is to ensure that structurally disadvantaged groups have a deliberate seat at the table where decisions around AI are being made. She also pointed to California Assembly Bill 1018 as a promising step toward holding Big Tech accountable for the tools it is developing and their impact on our lives.

Seeds of Wisdom
The closing plenary featured GEO President Marcus Walton in conversation with Akilah Massey, followed by the inimitable adrienne maree brown in conversation with Chi, co-founder of the Cypress Fund.
In those incredibly rich conversations, I noted a few key takeaways that feel critical to hold onto in this moment, as the discourses around AI solidify (or, perhaps, rupture?).
We need space to explore truth as it evolves (Marcus): space to explore feels essential. What does it look like to adopt an exploratory lens? One that acknowledges change and evolution in terms of our understanding and needs related to AI? While I’ve been thinking that engagement with AI does not equal endorsement, might exploration offer a different on-ramp for mission-driven organizations and leaders?
“Remain Unsettled” (Marcus): What does it look like to engage with these tools and still remain unsettled? How can we use that restlessness to identify an alternative vision for the tech we want, one that lives up to the promise of its potential? I have been feeling extremely unsettled, even as I regularly engage with a variety of AI tools and sit in many different kinds of conversations about what responsible, mission-aligned AI looks like. Recently, I’ve been thinking that my task isn’t to resolve the unsettled feeling, but to use it as my intuition and guide so that I don’t become complacent with the technology or its ‘inevitability’. His words grounded me in that feeling.
“Every place is a changing place” (adrienne): What is changing about our places, and what is staying true, in times of rapid chaos and disruption? Where are the opportunities to shift toward the versions we want amidst the change? If everything is changing, what does that mean for our opportunities for influencing that change?
“Making power dynamics transparent helps us move from transactional to relational” (adrienne): This connects to my earlier wondering about power mapping around organizational AI adoption, and to Amba’s presentation about making the power structures in AI/Big Tech visible. As we determine our relationship with this technology and how it will impact our human-to-human relationships, how can we understand the power relationships hidden behind it all?
“Instead of looking for rock stars, look for earthworms” (adrienne): I spend too much time on LinkedIn, and a glut of “rock stars” pushes the discourse in different directions. It can be difficult not to get caught up in the game. In reality, I want to be an earthworm. I want to dig, churn through the earth's richness, and move steadfastly to help create a nurtured and fertile ground for what might grow. Rock stars won’t save us, but earthworms could allow us to develop something sustainable.
Overall, the conference felt like a great experience. Of course, I wanted more talk about AI, but I know that conference planning takes a long time and AI is changing so quickly that it can be tricky to plan ahead. Even with just a few sessions focused on it, though, conversations about AI were certainly in the air. I suspect that next year it will be a main theme, and I’m excited to hear how our thinking has evolved. More importantly, I’m looking forward to learning how philanthropy is leaning into shaping AI so that it truly serves the broader population, not just a wealthy few.