Beyond Pro-AI and Anti-AI: Talking About AI Without Choosing Sides
Nine questions you’ll hear this holiday season—and how to answer honestly
AITA for Using AI?
A few weeks ago, someone pulled me aside after I’d mentioned using AI in my consulting work. With their voice lowered, they asked: “Can I do this? Is it okay to use this?” The vulnerability in that question has stayed with me. Here was someone (a total badass, brilliant, community-engaged organizer, by the way) seeking permission to use a tool and worried they’d be overheard talking about it. Because my feeds are all AI, all the time, I was a little perplexed.
But then I had my own ‘keep quiet’ moment at book club and I realized that my professional and personal lives exist in completely separate universes when it comes to AI. Here and on LinkedIn, I talk openly about AI strategies and how I use it in my work. My business is entirely centered on helping consultants and organizations implement AI tools thoughtfully. But at book club? I found myself staying quiet, sensing the potential judgment, and ultimately unsure of how my work would land.
I’ve been noticing the gap between those two worlds more and more lately. And I find it interesting, not because I felt ashamed of my work in that moment, but because I suddenly saw how deeply divided we are about this technology, how much of that division depends on the discourse surrounding it, and how much depends on whether we’re using it for anything beyond viral videos or silly songs.
What concerns me most is the deepening divide: people are sorting (or feeling pressured to sort) into two clusters, AI enthusiasts who see no problems with the tech and are annoyed by those who resist, and AI rejectors who see nothing but harm and cannot fathom anyone ever touching the tool. But that binary honestly misses most people I know who are actually using AI (worth differentiating, because a lot of people have opinions about what it can and can’t do without ever having tried it).
Most of the folks I know who are engaging with and even leading AI in their organizations are not just cheerleaders. They’re wrestling with hard questions about privacy and security, environmental impact, labor displacement, and stolen training data. They’re using it cautiously, skeptically, curiously. The more I engage with AI and with folks using it intentionally, the more I’m convinced that’s exactly how we should be engaging with it. But gosh, that is a really hard thing to do in such a divided, reactionary milieu.
Enter the holidays…
And now, the holidays are coming. You’re probably going to spend time with family and friends, and AI will almost certainly come up. You might be the person who uses it and braces for the eye-rolls and interrogation. Or, you might be the person who can’t understand why anyone would touch this technology. Either way, these conversations are probably going to happen.
Instead of silencing others or feeling silenced about your own concerns, I think we have an opportunity to lean in and build our complexity-thinking muscles. This post/guide is my plea for a nuanced third space and a recognition that using and experimenting with AI doesn’t require you to be all-in or all-out.
What if we actually went deeper in our conversations and moved past the headlines? What if we could acknowledge real harms AND explore genuine benefits? What if “I use AI and I have serious concerns about it” became a coherent, respectable position that more of us shared (or admitted to)?
To me, that’s what engaged skepticism looks like. Rather than completely disengaging and scoffing at the mere idea, engaged skeptics are experimenting, seeking to understand the costs and benefits for themselves and their communities, and determining if and how they can use this tool. So, if that sounds like your approach, this post is for you. 🎁
Below are nine questions or concerns you’re likely to hear at holiday gatherings (or maybe you have them yourself). These are based on many of the questions I receive in conversations, workshops, and forums. I offer up my ideas for each question, and each response follows the same pattern: Yes, this concern is real AND here’s the complexity worth considering. I’ve also intentionally highlighted something I’ve read this year from fellow creators/thought leaders that has spurred and deepened my thinking on each issue. Note that this is my current thinking, and I’m sharing it not to convince you of any one position but so you can see what resonates with you and use these as conversation starters, ways to open up (versus shut down) dialogue.

9 Questions/Reactions You Might Encounter Over the Holidays
“Isn’t AI terrible for the environment?”
Probably. There’s no doubt that AI training and deployment have real environmental costs, as do many of the modern conveniences and technologies we use daily. Data centers consume massive amounts of energy and water, those impacts are measurable and growing, and the effects of those harms are often inequitable.
Check out these posts on environmental analysis that I've found useful. I particularly appreciate how Andy Masley publishes posts correcting errors and how he engages in the comments with sincerity.
But I’ve also learned that context matters enormously for how we think about trade-offs, and this is especially true around climate and environmental advocacy. When we talk about AI’s environmental impact, it’s important to put it in relationship to other things we do. How does running a text-based AI query compare to streaming Netflix for an hour? How does training a model compare to the energy used in agriculture or maintaining golf courses? I don’t say this to deflect or minimize, but to position AI within the broader calculus we’re all already making about our environmental footprints. I worry that we’re pinning all our ire on individual AI use and missing the forest for the trees: the work we need to be doing at the local policy level, and the very real benefits this technology offers many people relative to its costs.
Every day, we make decisions that have environmental costs (whether we consider those costs or not): flying to see family or attend conferences, running our air conditioning, eating hamburgers, driving instead of walking. We weigh those costs against benefits and make choices we can live with. We can and should do the same with AI. Are you using it to generate endless variations of a silly image, or are you using it to analyze climate data that could inform better policy? The environmental cost might be similar, but the benefit calculation is wildly different.
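If it helps to make that weighing concrete, here’s a minimal back-of-envelope sketch in Python. It’s my own illustration, not figures from this post or any study I’m citing; every number in it is an assumption you’d swap out for estimates you trust. The point is the habit of comparing one use against another, not the specific values.

```python
# Back-of-envelope comparison with ILLUSTRATIVE numbers only.
# Both figures below are assumptions for the sake of the exercise, not data
# from this post; replace them with estimates you trust before drawing conclusions.

ASSUMED_WH_PER_TEXT_QUERY = 3.0       # assumption: watt-hours per text prompt
ASSUMED_WH_PER_STREAMING_HOUR = 80.0  # assumption: watt-hours per hour of video streaming

queries_per_hour_of_streaming = ASSUMED_WH_PER_STREAMING_HOUR / ASSUMED_WH_PER_TEXT_QUERY

print(
    f"Under these assumptions, roughly {queries_per_hour_of_streaming:.0f} text queries "
    "use about as much energy as one hour of streaming."
)
```

Whatever numbers you plug in, it’s the comparison, not the code, that does the work: it forces you to put an AI use next to something else you already do without thinking twice.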
My engaged skeptic position: I use AI, and I think hard about whether each use justifies its cost. Sometimes the answer is no and sometimes the answer is yes. I try to minimize high-cost activities like video generation in favor of text queries that have a lower impact. The key is staying conscious of the trade-off rather than pretending it doesn’t exist, and figuring out how you want to advocate at the policy level as a voter and constituent and at the consumer level as a person with buying power.
“Aren’t AI data centers destroying our communities?”
Yes, data centers can be harmful to local communities and often don’t deliver the economic benefits they promise. The impact on local electrical grids and water supplies can be significant, driving up costs for residents while the benefits flow elsewhere.
If you want to learn more about how to add nuance and complexity to the discourse around AI, I highly recommend Natalia Cote-Munoz and her publication Artificial Inquiry. This post, in particular, is gold!
But, it’s more complex than that: data centers are actually incredibly efficient at scale. If you compare the energy and water use of a centralized data center to everyone running their own processors and servers, the data center wins on aggregate efficiency. The problem is that this efficiency comes at the cost of concentrated local impact. The community hosting the data center bears an outsized burden while everyone else benefits from the distributed service. (And everyone else may not be using it in ways that consider the costs/benefits or actually benefit anyone.)
Ultimately, we’ve seen this film before. AI is fundamentally an extractive technology, and corporations will be opportunistic about where and how they build unless we push back. But there are real examples of communities leading the way. In the Great Lakes region, local governments and community councils have been actively engaging with tech companies to set standards for how data centers are built and operated. They’re negotiating for genuine economic benefits, environmental protections, and accountability measures.
This post by Chantal Forster summarizes her predictions for “data centers coming clean” in 2026 and provides a number of useful links to current news stories.
My engaged skeptic position: Data centers are a problem that requires active civic engagement, not passive acceptance and maybe not even outright rejection. If you’re concerned about this, the answer is to pay attention to what’s happening in your community and advocate for responsible development. Support local representatives who are willing to hold tech companies accountable. This is a political and organizing challenge, not just a technology problem.
“Isn’t AI built on stolen work?”
Yes. This one’s tough, and it’s currently being litigated in the courts to determine what counts as copyright infringement versus fair use. Many of us, myself included, have had our work scraped and used as training data without permission or compensation. I had to sign over copyright just to publish my academic articles, and now they’re in these models.
And, because we live in a capitalist society, we’ve ascribed monetary value to intellectual property in a way that makes this feel like theft. We need to earn money to survive and participate in our economy, so when our creative or intellectual work gets used without compensation, it’s a real material harm. Intellectual property theft isn’t particularly new; ideas have been borrowed, appropriated, and used without attribution throughout history. But AI compounds the speed and scale of that appropriation, and the profit extracted from it, while limiting creators’ control and obscuring the process in ways we haven’t seen before.
If you want a 🤯 perspective on this, be sure to check out Christian Ortiz’s piece:
However, we are starting to see emerging models for doing this differently. Researchers and developers are working to build AI systems in more ethical, transparent, and sustainable ways. Others are experimenting with data cooperatives where creators have agency over how their work and data are used. These aren’t just abstract ideas. They’re real alternatives that prove we don’t have to accept the current extractive model, and that if we value what AI offers us, we can (and should) create options that respect intellectual property.
My engaged skeptic position: I’m troubled by how current models were trained, AND I’m choosing to engage with tools that are more transparent about their training data and exploring compensation models. I pay attention to which companies are trying to do this better and which ones aren’t. I believe that opting out entirely doesn’t help shape better alternatives, but being an informed, critical user does.
“AI is only being used to cheat.”
Yes, people use AI to cheat. They do this in schools, at work, and wherever there’s an incentive to cut corners. That’s real and problematic.
But, the “it’s only for cheating” narrative does real harm to people who aren’t cheating at all. It alienates users for whom AI offers genuine accessibility and agency they haven’t had before. Especially in my neurodivergent circles, I regularly hear of people who struggle to organize their thoughts, who need help expressing themselves clearly, who have language barriers or learning differences. For them, AI isn’t about “gaming” the system; it’s about finally being able to participate fully or with less stress/exhaustion/burnout.
Honestly, there’s also a privilege question here that makes me uncomfortable. When we paint all AI use as cheating, we’re often centering the concerns of people who’ve never needed these kinds of supports. I follow a handful of professors experimenting with AI in higher education, and several have pointed out that if an assignment is easily “cheatable” by AI, then maybe it wasn’t a good learning task to begin with. How could our assignments and learning experiences be so valuable that they’re effectively ‘cheat-proof’? That’s not meant to excuse actual cheating, but it does suggest we need more nuanced thinking about what counts as authentic work, and perhaps a reflection on our current system, pedagogy, and expectations.
To some degree, we’ve always known that our traditional learning environments aren’t well suited to how people actually learn, which is why we’ve experimented with hybrid or ‘flipped’ classrooms and ‘unconferences’. This post by Jason Gulya looks at the ‘transactional’ model of education and why AI is so tempting:
My engaged skeptic position: Obviously, using AI to pass off someone else’s thinking as your own is wrong. Obviously generating endless slop serves no one. But I worry about a framework that treats all AI assistance as suspect, because it ignores the real ways these tools are helping people who’ve been struggling to participate and/or have their voices heard. Rather than simply asking “are you using AI?” I think the more interesting question is “are you thinking, learning, and being honest about your process?”
“AI is deskilling us.”
Yes. More research is emerging about this, and it’s a legitimate concern. When we outsource cognitive tasks, we risk losing the skills and knowledge that come from doing that work ourselves.
But what I’ve learned over the last few years of near-constant experimentation with a variety of tools is this: awareness matters enormously. Build in opportunities to notice your own AI use. Ask yourself: what thinking am I outsourcing and why? Am I crunched for time? Do I need help organizing? Or am I just defaulting to AI because it’s easier? I try to do 20-30% of the initial thinking myself before turning to AI. I find this is enough to stay grounded in the work, to know what I’m looking for, to recognize when the output is off, and to feel like what I end up producing is rooted in my own perspective.
This piece by The Human Playbook is a great provocation on skills and human value in the age of AI:
I also think there’s a crucial difference between people who’ve already built expertise and people who are still building it. For experienced professionals, AI might be more augmentation than replacement. We have the foundation to evaluate and refine what it produces. But for people earlier in their careers, letting AI do too much might actually prevent them from developing critical skills in the first place. That’s not deskilling; that’s never building the skills at all.
My engaged skeptic position: I pay attention to whether I feel rusty in areas where I used to be sharp. When I notice that, I pull back and do more of that work manually. I’ve also found myself doing more things with my hands and brain in my ‘free’ time—crocheting, knitting, reading physical books written by humans. These feel like necessary antidotes to the digital thinking and AI overwhelm. They are ways for me to stay grounded in the physical world and in slower, more embodied forms of knowledge. Basically, if you’re going to use AI, build in practices that keep you sharp in other ways. 🧶
“AI is stealing/going to steal our jobs.”
Yes, AI will displace jobs. That’s already happening and it will continue.
But here’s what frustrates me most about this framing: AI isn’t stealing jobs. People making decisions about ROI and bottom lines are making choices about jobs. This is a human decision, not an inevitable technological outcome. Companies could choose to use AI to augment workers and invest in reskilling programs. They could prioritize people alongside profits. Some are, but many aren’t. Either way, it’s a human choice, one that can be incentivized and influenced, not a law of nature.
The work of the AI Now Institute does a fantastic job of pointing out how the current discourses around AI ultimately remove our agency. I saw the Co-Director speak at the GEO conference and appreciated her call for discussions, advocacy, and tools that don’t predetermine that AI will be something that happens to us.
Every major technological advancement has caused waves of job displacement and transformation. I know that’s cold comfort if you’re the one being displaced, and I’m not trying to minimize that pain. I’m a consultant, after all, and a prime target for AI replacement. But we go astray when we forget that humans are making these choices. The more interesting question to me is “how will companies and policymakers respond to this displacement?”
My engaged skeptic position: This is fundamentally a labor and policy problem, not a technology problem. If you’re worried about job displacement, the answer is organizing—unionizing, advocating for worker protections, pushing for reskilling investments, supporting candidates who prioritize labor rights. The more you actually use AI, the more you see its real limitations and understand where humans remain essential. There will be jobs eliminated, jobs transformed, and jobs created. I think it’s our job to fight for systems that protect and support workers through that transition, not to pretend we can stop the transition itself.
“AI just hallucinates everything.”
I mean, it certainly can. AI models generate confident-sounding user-pleasing nonsense with alarming regularity. They fabricate citations, invent statistics, and create plausible-but-wrong information all the time.
But I’ve noticed that the more you use AI, the better you get at spotting hallucinations and understanding how to prompt effectively and limit them. You develop instincts for when something feels off. You also get better at being more specific, providing context, using tools that ground responses in actual sources (NotebookLM for the win!). The improvement in models over the past year, particularly their ability to handle context and use retrieval tools, has significantly reduced (though not eliminated) the hallucination problem. And now, if citations are critically important for your work, there are many more purpose-built options designed specifically for this task.
There are also practical strategies: using projects and notebooks that pull from specific sources, fact-checking anything important, never trusting statistics or citations without verification. The hallucination problem will likely never completely go away, which is exactly why humans remain essential in the loop. You can’t just accept AI output uncritically—but honestly, you shouldn’t accept any source uncritically.
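As one concrete illustration of that verification habit (my own sketch, not a tool or workflow recommended in this post): if an AI answer cites papers by DOI, you can at least confirm the DOIs resolve before leaning on them. The DOIs in the example are placeholders.

```python
# A minimal sketch of "never trust citations without verification."
# If an AI answer cites papers by DOI, confirm each DOI at least resolves
# at doi.org before relying on it. A resolving DOI doesn't prove the paper
# says what the model claims; it only catches fully fabricated references.
# The DOIs below are placeholders, not real citations.

import urllib.request


def doi_resolves(doi: str, timeout: float = 10.0) -> bool:
    """Return True if https://doi.org/<doi> resolves (HTTP status < 400)."""
    request = urllib.request.Request(f"https://doi.org/{doi}", method="HEAD")
    try:
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return response.status < 400
    except OSError:  # covers URLError, HTTPError, and network timeouts
        return False


claimed_dois = ["10.1000/placeholder-1", "10.1000/placeholder-2"]  # placeholders
for doi in claimed_dois:
    verdict = "resolves" if doi_resolves(doi) else "does NOT resolve (possible hallucination)"
    print(f"{doi}: {verdict}")
```

A check like this won’t tell you whether the paper actually supports the claim, which is exactly why the human stays in the loop.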
I love this exploration by Michael Spencer and Alex McFarland on how to use NotebookLM (one of my favorite tools for reducing hallucinations):
My engaged skeptic position: I use AI knowing it will sometimes lie to me confidently. That’s why I verify, cross-check, and stay skeptical of anything that seems too convenient. I’ve learned which types of tasks are more prone to hallucination and I’m extra careful there. But I’ve also gotten really good at grounding my queries in specific sources, structuring prompts and instructions so that over-interpretation is limited, and using multiple tools to cross-check one another. The key is treating AI as a collaborator who sometimes gets things wrong, not as an oracle who always gets things right.
“What’s the point if AI does all the thinking for you?”
I don’t have AI do all my thinking. That’s not how most people I know actually use these tools.
I use AI to volley ideas back and forth, to argue with myself, to refine half-formed thoughts into something clearer. It’s thinking WITH me, not FOR me. That distinction is crucial. In the instances where I’ve let AI do too much of the thinking, I’ve almost always regretted it and had to backtrack. The output feels flat, generic, disconnected from what I actually meant to say.
If you want an amazing follow, check out Baratunde Thurston and his Life With Machines podcast and newsletter. I appreciate how willing Baratunde is to explore nuance from a place of curiosity instead of judgment. His posts are 🥇.
The people I talk to who use AI thoughtfully aren’t outsourcing their thinking either. They’re using it to think better, faster, or differently. It’s like having a sparring partner who pushes back, offers alternative perspectives, or helps you see gaps in your logic. But you’re still the one making decisions, choosing directions, and knowing what’s right or wrong for your context.
My engaged skeptic position: If you find yourself just accepting whatever AI generates without pushing back or refining it, then, frankly, you’re using it wrong. The value of AI is using it as a tool that makes your own thinking more effective (or expansive). When the output feels too easy or too disconnected from your actual expertise and judgment, that’s your signal to step back and do more of the thinking yourself.
“Aren’t you worried about AI taking over?”
This depends on the day. Not imminently. We presumably can still turn off the power switch, right?
But more substantively: recently, I was listening to Fei-Fei Li talk about how AI is fundamentally flat. It’s language-based and still needs humans to provide context, grounding, and connection to the physical world. There are so many other forms of intelligence (spatial, emotional, situational, embodied) that aren’t part of current AI systems and would be required for any kind of “taking over” scenario. The doomsday narratives overstate what these models can actually do. Of course they’re powerful, yes. AND, they’re also deeply limited in ways that become obvious when you use them regularly.
Learn more about Fei-Fei Li’s vision for the future of AI.
Here’s what does worry me: while we’re busy fighting about whether AI is good or evil, the people building and deploying these systems are making consequential decisions without meaningful public input. We’re letting the conversation be dominated by either breathless hype or existential dread, and neither of those positions creates space for the kind of engaged, informed participation we actually need.
My engaged skeptic position: This is exactly why I think it’s important for skeptics to stay engaged rather than opting out entirely. We need good people in these conversations, understanding the technology’s real capabilities and limitations, advocating for responsible development, and pushing back against both reckless deployment and regulatory capture. I don’t think tech companies have our humanity and interests at heart. That’s precisely why we need to understand these systems well enough to fight for better alternatives. Engaged skepticism isn’t giving in to the inevitability of AI; it’s being informed and active in shaping our technological future.
What Engaged Skepticism Actually Enables
Most of my conversations around AI feel like a constant exercise in “yes, AND”. Maybe it’s an exercise in cognitive dissonance and mental gymnastics to justify my use. Or maybe it’s because it is really, really hard to hold complexity (especially these days). Holding “this is useful AND harmful” at the same time is a muscle we’re losing. (For my parent readers, I’m reminded of Dr. Becky’s “two things are true” statements!) Everything gets sorted into good or bad, pro or anti, and we’ve forgotten how to sit with complexity without choosing a team. AI doesn’t have to be another battlefield in the culture wars, but it feels like it’s headed that way.
Being willing to engage in nuance is especially important if you’re someone who isn’t personally threatened by AI, or if you’re in a decision-making position around implementation. It would serve everyone well to practice taking other perspectives seriously: to think past the superficial talking points to the real trade-offs, the actual calculus of costs and benefits, and the nuanced questions about access, equity, and power, and to exercise our ability to imagine what’s possible if we’re more proactive than reactive and manage to ‘get this right’.
Ayana Elizabeth Johnson challenges us to act ‘as if we love the future’ and I find that so grounding and energizing as I navigate AI overwhelm.
So, back to that person who quietly asked me if they could use AI. Permission is part of what they were seeking, but it goes deeper than that. Most of us can already do what we want. The more interesting questions are: What do you want to do with this technology and why? How will you use it in ways that benefit you or others without becoming dependent? What will you learn and do differently? What are your watch-out points for when you might be getting too reliant? What values do you want to honor in your use, and how will you know if you’re drifting from them?
These aren’t questions with universal, objective answers. They’re questions each of us has to work through based on our own contexts, values, and responsibilities. It’s our personal AI-use calculus. And the calculus may change over time (“The only lasting truth is change” for my Octavia Butler fans). Engaged skepticism is about staying curious, critical, and honest about the trade-offs we’re making rather than obsessing over finding the ‘right’ position.
So, when you’re sitting around the dinner table with family or at a restaurant with old friends and AI inevitably comes up in conversation, maybe you don’t have to stay quiet and maybe you don’t have to convince anyone of anything. Maybe what we all need to work on is just to model that it’s possible to use something and still have serious concerns about it. That’s not fence-sitting or both-sides-ism. It’s the messy middle of where we are right now, and it’s how we stay proactive and imaginative in building the future we want.
Sharing is caring, especially when it comes to competing in the algorithms! If you liked this post, please consider sending it to someone else (and then have a conversation about it!)
💡Did you know?🌱 Helping mission-driven leaders, teams, and organizations thoughtfully and responsibly adopt AI is what I do! If you want to learn more, please reach out!
To stay up-to-date on my writings across platforms, please join my mailing list.