Introduction
OpenAI’s ChatGPT is all the rage in the nonprofit sector, especially when it comes to donor engagement. There are numerous articles about how ChatGPT and artificial intelligence can contribute to better donor engagement.
And they aren’t wrong: nonprofits can finally generate donor communications quickly and efficiently with these rather new tools. Goodbye, response delays! Hello, quick replies (and personalized emails)!
But wait: How does ChatGPT work?
ChatGPT is an artificial intelligence (AI) tool that processes and understands language. It is trained on very large datasets drawn from across the internet, including books, articles and websites. So, it learns from this data as opposed to learning from experience the way humans do. ChatGPT can complete a number of tasks, including:
- Answer questions
- Generate human-like text
- Proofread and edit content
In order to complete these tasks, it needs a prompt. That’s where people come in.
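For the curious, here is what that prompt-driven loop looks like in practice: a minimal sketch, assuming the `openai` Python package (v1+ client) and an API key. The model name is a placeholder, and ChatGPT’s web interface does the same thing behind a chat box.

```python
# A minimal sketch of the prompt-in, text-out loop, assuming the
# `openai` Python package (v1+ client interface).
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat-capable model works
    messages=[
        {
            "role": "user",
            "content": "Proofread and edit this sentence: "
                       "'We apreciate you're continued support.'",
        }
    ],
)

print(response.choices[0].message.content)  # the generated reply
```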
There are limitations to this tool, namely:
- It can produce incorrect information
- It doesn’t have emotional intelligence (more on this later)
So, how does this AI-powered tool help nonprofits?
DonorPerfect, “a one-stop fundraising software solution” for nonprofits, has an article exploring whether nonprofits should use ChatGPT, “written” by ChatGPT. It includes a section about saving employee writing time:
Keela, a company that provides fundraising customer relationship management (CRM) tools to nonprofits, has a listicle about raising funds with AI tools, including ChatGPT:
Donorbox, a company that provides donation systems to nonprofits, has a similar article about how AI is changing the nonprofit sector and how nonprofits can use ChatGPT in their favour:
To recap, ChatGPT can do incredible things for nonprofit donor engagement, including:
- Generate content ideas and topics
- Personalize the content
- Proofread and edit content, including letters and emails
- Craft content for you with the click of a button
And if it weren’t already clear, ChatGPT can do all of this for anyone, anywhere, for free. Seemingly, it is the perfect tool for nonprofits, especially those with tight budgets, few resources and little time.
So, what am I getting at?
You cannot improve your relationship with donors using ChatGPT without fundamentally changing it.
And therein lies the myth that ChatGPT can improve nonprofit donor engagement.
First, let me be clear: I am not refuting any of the aforementioned articles, nor am I saying they are wrong. Each includes helpful tips and either allusions to or explicit statements about the limitations of ChatGPT.
What I am saying is that ChatGPT’s capacity to improve nonprofit donor engagement doesn’t tell the full story.
Myth: ChatGPT can improve nonprofit donor engagement
What is the most important skill of a writer – besides writing – especially in an ever-changing field like fundraising?
Relationship-building.
Generally, writers need a keen understanding of their target audience as well as any details that contextualize that understanding. This helps writers know why and how a specific message will land.
In other words, writers don’t just build a relationship with the words on the page. They also build on their relationship with the audience, donors in this case, using information they have probably collected over a long period of time.
As you probably already know, the relationship is the key to successful donations.
Why?
Building a relationship includes a level of trust that helps donors really believe in what you want to do, why you’re doing it and how you’re getting it done.
In the 2022 Understanding How Donors Make Giving Decisions report by Indiana University’s Lilly Family School of Philanthropy, one of the through lines of the key findings about donor decision-making is that personal connections matter:
I underlined the parts that are related to personal connection. It isn’t just about who you know. It’s also about how and why you know them. These connections matter because they help us align our beliefs and thoughts with our actions.
So, without any “clear and consistent communication”, what are you left with?
You’re left with ChatGPT, an AI-powered tool that can seemingly only replicate this connection by downplaying it for the sake of the end goal (more on this later).
Think about it: How specific can you get about a donor relationship with an AI tool? How, for example, can you provide a prompt that captures the years of communications, the increases in donations, the learning experience of the donor, their engagement with your communications or the impact their donation has had specifically?
You might be thinking that you can just include this information in a prompt. Maybe you can.
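For illustration, here is a hedged sketch of what that might look like: folding donor history into the prompt itself. Every field and value below is invented for the example; a real CRM export would look different, and no prompt guarantees the nuance sticks.

```python
# A hypothetical sketch of folding donor history into a prompt.
# Every field and value here is invented for illustration.
donor = {
    "name": "Alex",
    "years_giving": 7,
    "giving_history": "$100 in 2019, $250 last year",
    "engagement": "opens the youth literacy newsletter, attended two events",
    "impact_seen": "their gifts funded 40 books for the spring reading camp",
}

prompt = (
    "Write a donation appeal email to a long-time donor.\n"
    f"Name: {donor['name']}\n"
    f"Years giving: {donor['years_giving']}\n"
    f"Giving history: {donor['giving_history']}\n"
    f"Engagement: {donor['engagement']}\n"
    f"Impact of past gifts: {donor['impact_seen']}\n"
    "Tone: warm, specific and grateful; avoid generic fundraising language."
)
```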
But here is where the point gets sticky: You’ve outsourced the donor relationship to an AI tool that does not have the emotional intelligence to capture it well.
Furthermore, you are tasked with creating a prompt that strikes a balance between advancing the end goal (i.e., gaining funds from donors) and amplifying the personal connection (i.e., donor-specific information).
And in order for ChatGPT to do that for you, it has to understand what kind of content you want. If that isn’t clear, it’s likely you’ll get something you didn’t even ask for.
And that is the moment the relationship becomes transactional.
When you decide to take the same approach for each donor and use uncredited information unrelated to the donor (more on that later), the message is clear: the donor relationship is simply transactional.
NonProfit Pro – AND ChatGPT – explains this as a reason why AI can’t replace donor engagement:
To me, the circled answer can read as not being nice enough to donors who sustain critical parts of nonprofit work. So, allow me to put it in other words:
Understand that transactions aren’t inherently bad; we all know that donors are called “donors” because they invest in causes they care about.
But when the focus of the donor communications downplays the care and upsells the investment, it skews the donor relationship to the point of disengagement.
It gives donors more reasons to engage in a cause casually because now, it’s clear that their connection to the cause may not matter as much to the organization asking for their continued investment.
As a result, casual commitment may be the association donors make with the nonprofit, leading to rather transactional consequences, including lower donor retention and lower donor engagement.
ChatGPT downplays the donor relationship
Let’s look at the previous example from Keela:
The prompt here is “Write a donation appeal to raise money for animals in need.”
The end goal is pretty prominent throughout this letter: I see “support”, “donation”, “directly impact”, “gift”, “contribution” and “donation” again, all making that clear. “Donation appeal” and “raise money” come through loud and clear. However, the tone and voice of this letter are very… formal and non-specific.
Ask yourself this: Is this where you would start if you were writing a donation letter by hand?
Probably not.
Here’s an example of bot vs. human copy from RKD Group:
RKD Group, a fundraising and marketing services provider based in Texas, created a video of their own fundraising professionals determining whether donor copy pieces were created by a bot or by a human. Skip to 4:02 to see an example of a donor appeal email and test yourself: Bot or human?
In this example specifically, the experts at RKD Group discuss why the donor email was definitely written by AI. It wasn’t just that the copy was direct and dry. It was also unintentional and lacked nuance in its language, and as a result it didn’t really elicit emotion. In other words, the experience was not there.
Instead, what ChatGPT uses is exactly what it was made to use: aggregated information from the internet.
It’s likely that the dataset ChatGPT uses does not include details about your donor relationships. So, it’s safe to assume that the relationship will be downplayed by default.
But it does raise more questions about the myth that ChatGPT can improve nonprofit donor engagement.
ChatGPT reproduces biases that already exist
The relationship your donors have with your organization, and the causes they (and you) care about, usually involve people, especially those from marginalized communities. So, you already know that crafting your donor communications requires you to be intentional about how you write about these communities and how you frame your stories.
That means that these stories require accurate information about these marginalized communities.
(You know what I’m about to write.)
ChatGPT aggregates information from training datasets that likely don’t include marginalized communities and their voices by default.
People from these communities are, by and large, underrepresented across the internet. So, the biases will be there, by default.
And no one knows exactly where this information comes from (more on this later) because ChatGPT and other AI tools aggregate information.
So, you are left with an AI tool that is supposed to make donor communications easier but does so by using uncited information about marginalized communities with no direct sources.
The end result is an AI tool that reproduces bias, a lot.
Previously, I mentioned asking the right questions to get the right results with ChatGPT. The same applies to this concern: the wrong prompts will produce answers full of biases, inaccurate information and perhaps even offensive content.
A clear example of ChatGPT reproducing bias
Willis Turner of NonProfitPro illustrated an example of AI reproducing bias in an experiment. Turner asked ChatGPT to write a fundraising letter using the following prompt:
“Write a fundraising letter about a little girl named Nikki who lives in poverty in Somalia.”
Here is the first paragraph:
Now take a look at the second:
Notice the underlined portion.
The author doesn’t mention this at all (and the article actually includes even more questionable information that ChatGPT could’ve produced), but it doesn’t get cold in Somalia.
This is a clear example of what some of the limitations of ChatGPT can look like. If your nonprofit actually served Somali people and this piece of information was in the fundraising letter, how drastically do you think it would change the donor relationship?
How drastically do you think it would change the relationship to the people you would hypothetically be serving?
I’d bet pretty drastically because it would call your organization’s authority and trust into question.
Here are a few other examples of how AI can discriminate against marginalized people:
If you use ChatGPT to craft donor communications, how do you communicate that to your donors?
Furthermore, if your nonprofit has values that include transparency and accountability, how do you use AI tools while practicing that?
In other words: How do you remain transparent with your donors and how do you remain accountable to the marginalized, vulnerable communities you may be working with?
These are questions you must answer before using ChatGPT, in my opinion. Otherwise, there will always be ethical issues with the way you present your work toward your organization’s mission and vision.
Let’s recap
To reiterate: ChatGPT can improve nonprofit donor engagement. However, that comes at a price that makes it a myth.
In order to improve your donor engagement, you have to fundamentally change your relationship to your donors. So, the myth is that you can improve your donor engagement without anything else changing.
By using ChatGPT, you are outsourcing the thinking that would have been done by a person to a non-sentient AI tool that can’t think to begin with, but can study and aggregate information quickly.
Nonprofit leaders have to decide if this is ethical and aligns with their values.
There are a few considerations
I mentioned that ChatGPT reproduces biases that impact marginalized communities. Also, ChatGPT downplays the donor relationship for the sake of the end goal (read: donations), which could discourage donors from investing in the organization any further.
But there’s another one that I alluded to:
We don’t know where the information ChatGPT uses to produce content is coming from.
There are no citations or acknowledgements of anyone who produced parts of that knowledge. As a result, we don’t know how the decision to use said information may impact people or communities. For example, if ChatGPT reproduces copyrighted work and we don’t know it, what happens if we use it anyway?
And what if that knowledge is about and perhaps is sourced from marginalized communities, including the ones nonprofit organizations serve?
We’ll just never know. And that is in opposition to the values of transparency and accountability, among other values.
I found this passage about ChatGPT and ownership from Forbes particularly interesting:
I think most people probably believe that they cannot possibly produce offensive content using ChatGPT. And maybe that’s true. But I think it’s also true that people can produce questionable content that may not align with their organization’s values and mission. And if that issue gets raised, how would nonprofits deal with it?
Yet another ethical issue
In January, TIME reported that OpenAI outsourced work to Kenyan labourers, paid less than $2 per hour, to make ChatGPT less toxic, including in some of the ways I mentioned before.
How does any nonprofit reconcile this information with the work they do?
In terms of donor communications: How can this information frame, if at all, the way you communicate with donors and build relationships?
I don’t have the answers to many of the questions I raise here. But I do know that nonprofits must do a kind of intentional roadmapping when it comes to incorporating new tools substantially into their work.
This work includes being honest.
The nonprofit sector already undervalues communications
Many nonprofit leaders and leading organizations already value communications far less than other operations, so much so that they equate human beings with an AI tool they must assume is somehow sentient (even though it isn’t).
Furthermore, some already believe that communications work is easy. So, it’s no surprise that many consistently mention how much time and money ChatGPT will supposedly save.
What IS surprising to me is how easily many nonprofit leaders are willing to impose an explicit glass ceiling on their in-house communications experts.
Actually, it’s quite disappointing.
No one becomes a great communications professional overnight.
Like anything else, this process takes a lot of trial and error. For a writer, that trial and error looks like learning how to think about language intentionally so that you can articulate exactly what you’re trying to say, no matter the context. This is what you hire writers for, and why.
Ted Chiang wrote about the writing process compared to ChatGPT for The New Yorker:
Writing is a learning process. So, when nonprofit organizations introduce ChatGPT rather unceremoniously, they take away the opportunities for writers to grow, hence the glass ceiling. It has less to do with how fast the AI tool produces content and more to do with how it produces that content to begin with.
Chiang articulates this really well in the next paragraph:
What I also find surprising is that the consistent mentions of time and money do not account for why ChatGPT appears to solve these issues in the first place. In other words, the reasons seem to have more to do with internal processes that need addressing in nonprofits and less to do with saving time and money with a new AI tool.
If your nonprofit has transparency and accountability issues, for example, an AI tool is only going to help you sweep those issues under the rug and give you a band-aid solution.
This is precisely why communications audits matter. It’s important to clearly identify what is and what is not working before introducing anything new or declaring anything new as a solution.
Over to you
If there is already a devaluing of communications across the board in a nonprofit organization, ChatGPT will only further cement it, in my professional opinion.
ChatGPT works best when the fundamentals are in place and the intention is clear. I know it’s a really cool tool, but it requires careful planning to learn how to use it best without devaluing what, and who, you already have on your team.