More Gardening, Less Controlling: AI, Leadership, and Power Dynamics
How Network Leadership and Experimentation Culture Drive Successful AI Adoption
AI adoption is wreaking havoc on org charts. Or, maybe it’s the org charts getting in the way of successful AI adoption? Either way, it’s time to take a good hard look at the power dynamics in your organization and accept that implementing and leveraging AI for your mission-driven goals might require some new approaches.
I hear a lot of folks liken AI to ‘the internet’ as a way of helping people understand the scale of disruption/transformation and the need to pay attention. And while I think that’s true in terms of how much this innovation will influence our lives, there’s also something unique about AI that leaders need to understand: AI shows us the power dynamics that haven’t been working in our organizations. Like, for probably a realllllly long time. As such, it requires different leadership skills to manage because it’s not flowing in the traditional top-down way. We’re gonna need a bigger (leader)ship.*
Previous tech innovation followed money and hierarchy down predictable paths. New technologies like computers were expensive, so it took a while (roughly 20 to 30 years, from what I’ve heard recently) for a majority of workplaces and then homes to have them. But right now, if you have a smartphone, you have AI. That means the intern and the C-suite exec have access to the same technology. And my guess is they’re using it in very different ways (and could probably learn a thing or two from one another).
And whereas previous tech has required fairly linear skill-building (e.g., spreadsheet basics, then advanced spreadsheet formulas, and so on), AI use thrives on experimentation. It’s a lot more ‘throwing spaghetti at the wall’ than we might care to admit. But it’s also evolving so fast that prescriptive, linear approaches to implementation are destined to fall behind or become irrelevant before they get too far along.
One thing I’ve said over and over, and heard from others too, is that AI isn’t a tech adoption challenge — it’s a leadership challenge. How we’re handling AI as leaders and organizations says a lot more about how we lead, the psychologically safe workplaces we’ve built, and our ability to foster learning cultures, than it does about our tech capabilities. (However, the way we’ve led our tech and operations up to this point is making or breaking some attempts to use AI because wow, when that data and infrastructure is a mess, it sure takes a while to clean up!)
So, how can leaders navigate this moment? And, how can we use this disruption as an opportunity to upend some of the hierarchical leadership mindsets and behaviors that are holding us back?
Sidenote: I realize that not all organizations need or want to engage with AI. If that’s you, then you’re probably not a regular reader of this newsletter. For this piece, I’m really thinking about the organizations that have a mismatch between how their leadership is leading on AI and what their employees are wanting, feeling, and fearing.
Hierarchies aren’t helping
AI shattered the typical tech adoption timeline. In less than three years, ChatGPT went from launching to being in nearly everyone’s pocket. (Or, at least, everyone with a smartphone can have it in their pocket.)
Access to this type of technology, at this scale and speed, is structurally disruptive. Previous tech was so expensive that it was negotiated and navigated by procurement departments, which naturally slowed things down.
But AI? We’ve all got access to the same tools regardless of our position. The nonprofit program coordinator can experiment with the same tools as a Fortune 500 executive. This democratization of access breaks the traditional "roll out from the top" model that organizations have relied on for decades. In fact, I think if you’re approaching AI implementation from a top-down perspective, you’re already hindering yourself at the outset.
This is not to say that a more bottom-up approach is going to be more effective — there’s still an important role for positional leaders to play; it’s just different from what we’re used to as we navigate change management. Instead, AI implementation can benefit from a networked approach — one that eschews hierarchy and org charts in favor of energy, experimentation, momentum, and perspective.
Now, network effects matter more than reporting lines or, gasp, departments. Yes, that means that siloing your AI adoption within the IT department might not be the best strategy. No shade to IT departments, as they are clearly part of the picture. But this type of adoption and implementation, especially in human-centered ways, is going to require a strategic and intentional cross-section of your organization that nicely sets aside the existing structures.
The person who discovers a brilliant use case might be anyone in the organization. They’ll discover it because they were willing to be curious, experiment, and play (hopefully within the bounds of an organizational policy and an approved, actually decent tool that you’ve set up!). They’ll also probably discover it because they are closest to the work, and therefore the challenges, that your organization needs to solve to meet its goals.
And when they discover it, they’ll (probably) tell people about it, and others will start to seek them out for ideas or to learn. Before you know it, they are influencing AI adoption at your organization and, as far as networks go, they’ve become a node with power, influence, and insight. Whether you support them, learn from them, and figure out how to build on what they’re doing will be up to your leadership mindsets and behaviors.
The power dynamics have fundamentally shifted, and our leadership structures haven't caught up. And, they probably won’t, because humans are bad at change. But AI adoption presents an opportunity to try some new approaches, even if “just” experimental.
You can’t mandate emergence
I love bringing in insights from complexity, complex adaptive systems theory, and systems change work because they always offer ways of flipping my understanding. In complex systems, for example, innovation emerges from the edges, not the center. You can't control emergence any more than a gardener can make a seed sprout by commanding it. What you can control are the conditions that enable or hinder emergence. There’s also an assumption that emergence will produce what the system needs to thrive. The goal is not to predetermine that, but to put into place the conditions (soil, nutrients, light) that will enable it.
Yet leaders keep trying to mandate. They mandate AI adoption (see those viral AI memos…). They require teams to "use AI" in their work without supporting the human-centered side of transformation — psychological safety and job security, for starters. They set up committees to develop AI policies, which might be out of touch with the actual practice guidelines and most relevant use cases. And then they wonder why adoption is slow, why people seem resistant, why the transformative benefits they read about aren't materializing, or why people are continuing to subvert the policy and use AI in ways they aren’t supposed to.
The problem is that mandates and bans remove agency and reinforce the existing power dynamics of the organization. You can mandate or prevent tools from being used, but you can’t mandate or prevent curiosity and experimentation. (I’m resisting the urge to rekindle my academic background in adolescent development and liken it to all the bans we put in place to attempt to control adolescent behavior, but that would really take me down a tangent!)
I've seen this play out. Employees used AI brilliantly to solve problems but never mentioned it in team meetings because the official AI policy was still "under development." Innovation was happening, but it was siloed and silent. The organization was learning nothing from its most creative experimenters. And, in the worst cases, those creative experimenters felt stifled in their organizations, and it was affecting their sense of engagement (i.e., they were questioning how long they might want to stay at such an organization).
Contrast this to organizations that lead with invitations rather than bans or mandates. Picture a leader who engages their team transparently in their own AI use and adopts one simple behavior change, such as an opening question in team meetings: "What's your AI experiment this week?" They’re signaling no judgment, no requirement to succeed — just curiosity about what people are trying and an eagerness to learn. With an approach like that, within months, organic adoption can spread throughout the organization, driven by peer enthusiasm rather than top-down pressure.
And, if you’ve done the work to put into place the conditions that will support that organic spread, you can watch it grow and marvel at the bounty. It doesn’t mean there’s not a place for some fences around the garden or some intentional timing and strategy, but it does mean doing a little more sensing of the conditions, seeing what is working and what isn’t, and figuring out how you might need to adapt.
Understanding your real org chart
Traditional organizational power and structure asks: Who reports to whom? Who approves what? Who has budget authority?
Network power asks different questions: Who influences whom? Who do people turn to when they want to try something new? Who shares failures as generously as successes? Who connects different parts of the organization? Who is siloed?
When you take a step back and look at the individuals in your organization and the connections they have with colleagues, you’d likely produce a map that looks very different from your ‘official’ org chart. In highly relational organizations, you may be familiar with this if you hear "go to so-and-so, she really knows what’s going on…" Or, you might be the person who is always getting the DMs for quick chats or advice.
These informal power networks are driven by influential network actors — let's call them "pollinators" to keep with the garden metaphor. And they rarely align with the hierarchy. They might be the program manager who's always sharing interesting articles, the person who knows the whole history of the organization and its evolution, or the manager who is consistently sought out for mentoring and advice. In the case of AI, they are the people who are trying out new things and learning — sharing what worked and what didn’t, asking others to share new tips, and inviting experimentation over perfection.

Thus, identifying and empowering these pollinators, regardless of their position on the org chart, becomes crucial for AI adoption. They're the ones who will carry new ideas across organizational boundaries, and who will make experimentation feel safe and exciting rather than risky and mandated. (To learn more about ‘innovation networks’ you can check out the work of Adnan Iftekhar and Brian Moynihan and their book, AI Culture Shift.)
Lead like you’d tend a garden
This brings us to perhaps the most fundamental shift AI demands: moving from hierarchical “factory” style leadership to “gardening” style leadership. It’s not just AI that is demanding this. Really, it’s that our modern business and societal challenges are so complex and interconnected that different leadership skills are required than the ones we’ve perfected over the last few decades. I helped write about this years ago at the Center for Creative Leadership, and I remain super proud of the ideas in that paper and recommend you check it out.
To an extent, I understand why positional leaders are struggling at this moment. The ideas we have of leaders and leadership are that they should project confidence and certainty, especially when the waters feel chaotic and unpredictable. It’s that old heroic model of leadership that is so entrenched. But, right now, even the leaders in the AI field aren’t exactly clear on what AI is and isn’t capable of. That doesn’t inspire confidence. And trying to over-engineer confidence in this moment, at least in mission-driven organizations, is probably a Sisyphean task.
“Factory” leadership assumes predictable inputs lead to standardized processes that create consistent outputs. It’s linear in that we believe that we can identify a desired outcome, map the processes needed to get us there, and then enact them in a certain order to achieve those goals. It's hierarchical because it's about definition and control — controlling quality, controlling efficiency, and controlling outcomes. Someone at the top designs the system, and everyone else executes their part. Like a well-oiled machine, it can be optimized. There’s still room for innovation, but not the kind of disruptive innovation that would potentially destroy the business or transform it into something completely new.
“Gardening” style leadership recognizes that in complex systems, you can only control conditions, not outcomes. Gardeners don't make plants grow — plants make themselves grow. Gardeners create conditions that support growth: good soil, adequate water, sufficient light, and protection from pests. Gardeners appreciate the lack of control they have in the context (i.e., weather patterns, etc.), but they trust the process and the preparation. They know they might be surprised by what grows, and that they will adapt to the changing conditions. They know that some principles exist with some certainty — such as the timing of planting — but that doesn’t stop them from watching the weather and adjusting their timeline.
Because AI is so broadly accessible, carries real risk, and is changing so rapidly that best practices are quickly revised or out of date, leaders who are focused on certainty and control and who favor traditional, hierarchical change management are not likely to realize the full potential of the technology or lead their organizations through this disruption in a way that truly grows and seeds the culture for the realities of the AI-enabled era.
This is not to say that the org chart should be cast aside. Positional leaders still have a critical role to play; it’s just fundamentally different from how we’ve approached tech and capability development in the past. Instead of being the source of innovation (or the mandate deliverer), they become the removers of barriers, the enablers of emergence. In other words: listen to people (your cross-sectional AI committee), understand what they need (decent, capable tools in a safe environment), trust why it’s important (they know the mission, stakeholders, and needs), give them what it will take to do it (money), and then get out of the way. Or, join in, but as a student.
What do “garden” leaders do in the context of AI adoption? If you’re a positional leader, here are some things to consider:
Prepare the soil: Build psychological safety, allocate time for experimentation, and celebrate learning from failures. You might also “test” the soil to see what you need to do to improve the conditions. If people don’t feel safe speaking up or engaged in the work, there’s some work to do.
Secure resources for experimentation: actually good ones. Set up a secure installation of one of the main frontier models and tell people clearly how they can use it, what is off-limits, and what is safe.
Foster psychological safety for trying and failing. Do this by modeling it yourself. Share the ways you’ve tried AI even if it is embarrassingly simple. People are following your lead more than you realize. Everyone is just figuring this out and I assure you that your uses are helpful to share.
Reduce policy transgressions by pairing restrictions with viable alternatives that still allow responsible experimentation. Banning notetakers? Invest in finding one tool that you can approve. Preventing the use of internal data in any model? Invest in de-identifying data sets to a degree that you feel safe having them used, and share those as fodder for experimentation.
Connect pollinators across the organization. Find out who people are going to. When you hear “so and so showed me this cool tip!” reach out to them. Not to supervise or curtail, but to learn. Create meeting activities and spaces where these folks can come together to share their ideas and learn from one another. Even better, give them dedicated time as part of their role to explore AI and steward responsible use from others.
Celebrate learning, not just success. Straight out of Amy Edmondson’s work on psychological safety: building safety in learning isn’t just about sharing the wins; it’s also about sharing the failures and the lessons learned from them. What good is a win if we don’t know how to recreate it by understanding the hits and misses that got us there?
What garden leaders don't do is equally important. They don't try to make innovation happen faster by mandating it. They don't insist all plants grow the same way or all gardens produce the same harvest. They don't panic when some experiments fail—they know that's part of a healthy ecosystem.

What your garden reveals
Another key lesson that I think leaders are missing is this: AI adoption is an organizational mirror. It reflects your existing culture, sometimes with brutal clarity. As I glance out the window into the backyard at the garden my kids so desperately wanted to plant, one thing is clear. I’m a terrible gardener. I have the core elements (seeds, soil, water (when I remember), sunlight (apparently too much)), but I’m missing some key resources. And, while I’ve learned a few things from failed gardens of past seasons, I know I’ve still got a lot of work to do. The question for you is, what do you see when you look at your (organizational) garden and what does it say about your organization’s learning culture?
If people hide their AI use, you’ve probably got more of a trust problem than an AI problem. They're afraid of judgment, of violating unclear policies (if you have them), of being seen as threatening others' jobs, or of being seen as cutting corners rather than being more efficient, effective, or innovative.
If only certain roles or senior leaders get to experiment with AI, you don't have a resource problem — you have a power problem. Who gets time for learning? Who has permission to fail? Whose ideas matter?
If fear dominates curiosity in AI discussions, you don't have a communication problem — you have a psychological safety problem. People need to know that their jobs aren't at risk, that mistakes are learning opportunities, that questions are welcomed. How you communicate this and how you follow through on your commitments to upskilling, reskilling, and celebrating the truly human parts of their work and expertise will build that safety over time.
These issues existed before AI arrived. AI just makes them impossible to ignore.
Ultimately, these challenges offer a window into what inclusion looks like in your organization. Who in your organization has the luxury of experimentation time? Who feels safe admitting they don't understand something? Whose perspectives are missing from your AI explorations? If AI adoption follows existing hierarchical lines — more accessible to those with more positional power, more time, and more psychological safety — then it will amplify rather than address organizational inequities.
Three micro-moves
Ready to start cultivating? Here are three small but powerful shifts:
1. Map Your Innovation Network
Forget the org chart for a moment. Who are your actual experimenters and connectors? Who do people go to with questions? Who shares interesting discoveries? These are your pollinators. Empower them regardless of their title. Give them time, resources, and platforms to share what they're learning. Celebrate them, the little seeds that sprout and persist, that can feed the rest of the system.
2. Create Pollinator Partnerships
Pair enthusiastic experimenters with decision-makers for regular show-and-tell sessions. Not presentations, not formal reports—just "look what I tried this week" conversations. I remember some of these conversations with senior leaders from my earliest days of experimenting with AI. The little ‘aha’ moments and the positive feedback sent me right back to experimenting and figuring out new things to try. This creates bi-directional learning: experimenters get affirmation, resources, and removal of barriers, while decision-makers get ground-truth insight into what's possible and why they should care. Even if they don’t fully understand it, they can advocate for the conditions to support these experimenters.
3. Lead with Learning Failures
Start your next meeting with: "Here's how AI didn't work for me this week..." When leaders model that failure is just data, not a disaster, it changes the entire organizational conversation. Suddenly, everyone's failures become collective learning opportunities rather than individual shortcomings. Right now, the hype cycle is in full swing. I see a lot of people outsourcing a lot of things to AI that really need a bit more oversight. I struggle with it myself, sometimes foolishly assuming I’ll be able to do something in half the time and instead realizing that I didn’t necessarily save time but I did gain some interesting perspectives and insights (still a win). Sharing failures is going to be an important part of determining what is worth investing in and what isn’t, what reasonable “ROI” will look like, and what it will take to get there. For me, I’m never going to try to grow carrots again, and that’s totally fine. That little section will get more snap peas next year.
Let the garden grow
Take a moment to reflect: Where is experimentation already happening in your organization, outside the official channels? Those underground springs of innovation are your garden trying to grow. Leaders have a choice: control or nurture.
The organizations that will thrive in the AI era won't be those with the best top-down AI strategies. They'll be those who learned to see their org charts as trellises rather than hierarchies—structures that support growth rather than control it.
What would it look like to lead your AI adoption like a gardener rather than a factory manager? Where might you need to prepare soil, remove obstacles, or simply get out of the way?
*If you ever doubted a human was writing this, well, here you go. Not sure even Claude would let me be so punny.
Hey! I wrote those em dashes. I love them.
💡Did you know?🌱 Helping mission-driven leaders, teams, and organizations thoughtfully and responsibly adopt AI is what I do! If you want to learn more, please reach out!