Why Human-Centered AI Adoption Matters for Nonprofits

Before nonprofits can develop AI-driven solutions to help others, it’s important that they get comfortable with AI themselves. Here’s how one nonprofit did it.

How one nonprofit learned to incorporate AI tools without losing its human-centered focus.

When AI started upending the way people work, our team at Global School Leaders, a global nonprofit, was skeptical.

While we want to do our work well and efficiently, we care about doing work in a human-centered way, valuing deeply the unique wisdom and perspectives of our team, our partners, school leaders, and students. So, we held back a little bit.

This piece explores how we began to lean into AI, while still preserving human-centeredness.

The Beginning of Our Thinking

Global School Leaders focuses on strengthening school leadership across the Global South. Our team is all remote and global.

Our first conversations about AI focused on how we could support school leaders at scale more effectively and how school leaders could feel more empowered by AI tools.

At first, we thought that once we learned to use AI ourselves, solutions for others would become apparent. With that motivation, we sought out AI experts to understand how our organization could deliberately and cautiously start using AI to address big, messy world problems.

The answer: Before we develop AI-driven solutions to help others, we have to learn to use it to help ourselves first.[1]

A Learning Model That Centers People

We considered the traditional path of “learning,” like bringing in a trainer to lead a session. But we quickly realized that AI is not just a tool to learn; it shifts how we think about and do our work more fundamentally.

We built a voluntary community of practice across all our teams (not just the likely players — technology and operations) to build collective knowledge and learning about AI.

This strategy took more time but, we believed, would change our team’s practices in deeper ways and ensure that AI adoption happened across the organization, not just in silos.

With that deeper understanding, we would be able to see how AI could solve important social problems.

This community of practice had weekly prompts to encourage exploration and shared learning, and we built space for discomfort and critique. The organization paid for a subscription to the AI tool of each member’s choice; in return, members promised to use the tool and contribute to online conversations about their learning.

We learned that people on our team had been curious about AI but were nervous about using it because others might think they had “cheated” or taken shortcuts. So, we set a very simple policy that told the team to:

  • Use AI
  • Be honest about how you use it
  • Do not enter sensitive information into AI
  • Make sure you can stand by your work

By surfacing the worries, we were able to set standards that allowed people to experiment more freely.

A team member affirmed this approach: “I need AI in my radar, meaning, not only do I need paid access to it for best-use cases, but also — some conversation around it.”

Through this work, we saw firsthand the issues with AI that we had read about.

We saw hallucinations. We saw how AI exhibits sycophancy. We saw deep cultural sensitivity challenges. We saw concerns that the voices of the people we care most about — those in the Global South — were not the ones building these tools. And we saw AI doing things we thought only humans could do, which scared us for our own jobs and led us to reflect deeply on what this means for the roles we value most.

One team member said, “Some of the things I’m most proud of in my career are to do with making tricky things a bit simpler. But now a machine can do that. I know it sounds silly, but I feel as though something has been taken away from me, and I’m sad about that.”  

Instead of ignoring these challenges, we surfaced them in team discussions and turned them into learning opportunities.

“These regular meetings and activities have helped me a lot to be mindful about how it can be used at its best and what its challenges are,” said another team member.

But we also saw power. We saw how AI could help us focus on our relationships rather than tasks. We saw how AI could help us practice hard conversations before we had to have them. We saw how AI could help us organize huge amounts of information and pull insights, so that we as an organization could learn from our work more effectively. As we did this, skepticism shifted to curiosity, and curiosity shifted to creativity.

About 75% of our team now uses AI regularly to enhance their work. We use it to become more efficient, to amplify human insight, to allow us to detect patterns. But we also keep our skeptical hat on and check for bias or issues in the work.

How our team is using AI today:

  • Brainstorming and Idea Generation:
    Using AI as a critical thought partner in sharpening new ideas. A team member noted, “[AI] reduces my ‘inertia’ to start writing or thinking about an idea I have.”
  • Identifying Insights and Patterns:
    Summarizing and spotting trends across years of research and data from both within the organization and across the sector. For instance, we used NotebookLM to compile our learnings across projects and identify common challenges that we can collectively address.
  • Writing Support:
    Support in writing and proofreading emails, reports, and content, which is especially helpful for team members writing in a second language.
  • Translation:
    Because we are a global organization, AI has helped make our content and reports more accessible across languages. It has also made cross-language communication faster and more accurate.

Lessons for Other Nonprofits on this Path:

  1. Make space for collective learning, even if it’s messy: When learning something complex, a top-down training approach can be replaced with one in which the team builds knowledge together.

    This requires resources (subscriptions, time), focus (online community, someone responsible for prompting the team), and clear expectations (agreements to share the good and bad with each other).
  2. Preserve human judgment: As a nonprofit, we can adopt new technology in alignment with our human-centered values. We learned the technology in a human-centered way, and we can use that learning to ensure that we use it as such, too.

    Build the expectation that our team should still be able to stand by anything we produce. “AI generated it” is not an excuse.
  3. Treat skepticism as an asset. Skepticism is powerful. The fact that our team was skeptical about AI means that we double-check everything that AI gives us, we try new “AI-powered” apps out with great care, and we ensure that equity remains at the core.

    We especially work to make sure that our AI-driven solutions do not exacerbate fundamental equity issues in the world but instead help address them. Make that part of your checklist.
  4. Don’t wait for a long, detailed policy to start experimenting. Just start experimenting. Our policy was simple and gave people the safety and freedom to experiment.

We believe AI isn’t so much about speed in nonprofits, but about how it can strengthen the humans at the heart of our work.

Organizationally, we are in a creative exploration phase, where we can experiment, learn, and navigate AI use in ways that strengthen our mission and values.

We know the risks and the challenges, we understand that we need to bring our voices and experiences to anything AI gives us, and we are not afraid to experiment and see if AI can help us help others better.

Some examples of weekly prompts that we used internally:

  • Is there anything that has been difficult or that you need to learn more about?
  • What strategies are you using for building a habit of remembering to use your AI tool? What is motivating and demotivating about using AI?
  • In the time since you’ve started using AI, what are some things that have changed for you? Have you experienced any shifts in your mindset about AI? Have you gained new skills? Have you lost or gained some “muscle memory” because of using AI?

This author wishes to acknowledge that GSL team members Grace McManus, Tejas Airodi, Swarna Surya, and Esmail Bagasrawala contributed to this article.


[1] With huge thanks to Rick Leimsider, the Director of the AI for Nonprofits Sprint and the Entrepreneur-in-Residence at the Foundation for the City of New York, for initial thought partnership.


About the Author

Avni Gupta-Kagan has over 25 years of experience working to improve education outcomes for children on a range of issues, including human capital management, leadership development, strategic planning, and K-12 curriculum.

Avni believes deeply in the importance of building strong teams where adults can thrive in their work on behalf of children, whether in schools, in central offices, or in NGOs supporting schools. She works closely with organizations, school districts, and schools to do so.

At Global School Leaders, Avni has had the pleasure of helping to build an international, remote, high-functioning team that supports school leader development across the Global South.

Articles on Blue Avocado do not provide legal representation or legal advice and should not be used as a substitute for advice or legal counsel. Blue Avocado provides space for the nonprofit sector to express new ideas. The opinions and views expressed in this article are solely those of the authors. They do not purport to reflect or imply the opinions or views of Blue Avocado, its publisher, or affiliated organizations. Blue Avocado, its publisher, and affiliated organizations are not liable for website visitors’ use of the content on Blue Avocado nor for visitors’ decisions about using the Blue Avocado website.
