Hi everyone, and happy Spring if you are in the Northern Hemisphere. Last week, I moderated a conversation on Artificial Intelligence and how it might affect our sector. On the panel were Beth Kanter and Allison Fine, co-authors of The Smart Nonprofit, and Philip Deng, creator of Grantable, an AI-supported grantwriting platform. Here is the full video if you’d like to see it. Below are a few points I took away from the conversation with these experts. Those of you who are more knowledgeable in this area, please feel free to add your thoughts in the comment section or correct anything I got wrong. (By the way, ChatGPT came up with the title of this blog post.)
1. It’s natural to feel some combination of fear and excitement about AI: Beth, Allison, and Philip brought up the fact that throughout history, people have freaked out over new technology. Philip mentioned how the sewing machine brought fears that tailors and seamstresses would be made obsolete. Allison and Beth’s book detailed the freaking out done by doctors when blood pressure cuffs were invented. I’m still freaking out over broccolini. It’s OK. While the benefits of some technology, such as social media, are still being debated, many of the technologies we once worried about turned out to be very beneficial.
2. There are lots of cool things AI can help us do: addressing climate change, helping refugees, and more. Many organizations have started incorporating AI into their work. For instance, one uses chatbots on its website to instantly answer common questions 24/7. Another uses AI to train service providers. Another nonprofit uses AI to generate personalized appeal letters to donors. More and more people are using AI platforms such as ChatGPT to come up with first drafts of thank-you notes, outlines for presentations, press releases, etc.
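To make the chatbot example above a little more concrete, here is a minimal sketch of the simplest ancestor of such a tool: a keyword-matching FAQ responder. The FAQ entries and keywords are invented for illustration; real nonprofit chatbots typically sit on top of large language models rather than a hand-written lookup table.

```python
# A toy FAQ responder: the most basic version of a website chatbot
# that answers common questions 24/7. All entries are made up.

FAQ = {
    ("hours", "open"): "We're open Monday through Friday, 9am to 5pm.",
    ("donate", "donation"): "You can donate through the form on our website.",
    ("volunteer",): "Visit our volunteer page to sign up for a shift.",
}

def answer(question: str) -> str:
    """Return the first FAQ answer whose keywords appear in the question."""
    q = question.lower()
    for keywords, reply in FAQ.items():
        if any(k in q for k in keywords):
            return reply
    return "Sorry, I don't know. A staff member will follow up with you."

print(answer("How can I donate to the food pantry?"))
```

The appeal of even this crude version is obvious: it never sleeps, which is exactly the 24/7 availability the organizations above are after.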
3. No, AI is not going to take over our jobs. At least not all of them. People will still be needed to pilot a lot of things, provide oversight, and ensure the work is human-centered. Beth and Allison call it “cobotting,” the collaboration between humans and technology to get stuff done. So, while AI may lead to some job loss and change the way some work is done, it may also open up new opportunities. No need to worry too much. However, as AI becomes more and more prevalent in our work, the folks who are familiar with and experienced in using it will likely have an advantage over those who are not.
4. AI can free up our time and allow us to focus on what matters: If we can use AI effectively and thoughtfully, it can take on a lot of tedious, time-consuming tasks. For instance, robots stocking shelves at a food pantry. Grantwriting, as another example. A lot of grantwriting is just translating the same information from one funder’s burdensome and self-indulgent application to another funder’s. I strongly believe all grant applications should go away, and every organization should just have one comprehensive proposal that they use for every funder, with no tailoring. Until that happens, we can save tons of time through AI-enabled grantwriting tools such as Grantable, which will allow us to focus on service delivery and other tasks.
5. AI can help balance out some inequity: In some ways, AI can provide services that currently only larger, and thus more often white-led, organizations can afford. Grantwriting is again an example of this. The organizations that can afford professional grant writers have an advantage over those that can’t, the latter more likely being organizations led by marginalized communities. This creates a self-reinforcing cycle. Until funders take a more equitable approach to the distribution of funding (by ensuring that funds go to the communities most affected by systemic injustice, not just to whichever orgs can write the “best” proposals), AI can help balance things out a bit.
6. Smart technology, however, can worsen things: In their book, Beth and Allison bring up examples of technology being deployed with grave consequences. For instance, a tech-enabled intake process intended to prioritize the order in which people experiencing homelessness would receive services ended up putting white clients at the top of the list. This is because most people writing the algorithms used by smart technology are still white, especially white men, and so biases are naturally and often unconsciously built in. All of our trainings and conversations around equity must be applied to AI, or it has the potential to worsen the very issues we’re fighting.
7. We need to think about how AI can help or hinder accessibility: There is potential for AI to make things more accessible. For example, AI can describe images for people who are blind or have low vision, automatically add captions when people talk, or at least be trained to recognize and point out when we are not being thoughtful about accessibility. It can be taught to analyze our websites, videos, etc. to see if the language we’re using is too jargony, academic, gendered, or otherwise inaccessible, and suggest changes. At the same time, again, those of us teaching it to do these things need to constantly be aware of and regularly check our own biases, lest we replicate them in AI.
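As a tiny illustration of the jargon-checking idea above, here is a sketch of a rule-based flagger. The jargon list is invented for the example; an actual AI-assisted accessibility tool would use a trained language model and a far more nuanced notion of what is inaccessible to whom.

```python
# Toy jargon flagger: scans text for nonprofit-sector buzzwords.
# The JARGON set is a made-up example list, not an authoritative one.

JARGON = {"synergy", "leverage", "capacity-building",
          "operationalize", "stakeholder", "paradigm"}

def flag_jargon(text: str) -> list[str]:
    """Return jargon terms found in the text, in order of first appearance."""
    words = [w.strip(".,;:!?()\"'").lower() for w in text.split()]
    found = []
    for w in words:
        if w in JARGON and w not in found:
            found.append(w)
    return found

print(flag_jargon("We leverage stakeholder synergy to operationalize impact."))
# flags: leverage, stakeholder, synergy, operationalize
```

Even this crude pass shows the value: a writer gets a concrete list of words to reconsider, while a human still decides what to actually change.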
8. We need to think about the ethics around AI: There has been a lot of conversation in society about AI stealing the work of artists and writers, who are already underpaid. There are also issues of privacy: where data is stored, whether we are complicit if those data are used for nefarious purposes, etc. Our sector needs to think about things like whether it’s OK for an AI robot to comb through donors’ social media and then generate personalized letters to them (“Dear John, we hope you enjoyed the Adele concert in Vegas. Speaking of rolling in the deep, how about a donation, because you’re rolling in dough from that new promotion and we’re deep in our capital campaign…”). A solution everyone seemed to agree on is the need for transparency, especially disclosure of when AI is being used.
9. We need to advocate for laws to keep up with AI: Technology, as usual, advances a lot faster than the laws regulating it. Only relatively recently, for example, have legislators begun discussing protecting kids from being exploited by influencer parents who force them to produce content, decades after thousands of children have been traumatized. As we start using AI more frequently, our sector must be aware of problematic issues and push for legislation to ensure smart technology works for us and is not used against us, taking cues from laws such as Europe’s Right to Be Forgotten.
There was plenty more that Beth, Allison, and Philip covered, and with much more depth, but unlike AI, I am tired and Abbott Elementary is not going to watch itself. Overall, AI is here, it is not going to go away, it will probably change the way we work, often for the better, occasionally for the worse, and there are all sorts of ethical and equity issues to consider, and we all need to pay attention.
While you ponder all that, here’s a sonnet I asked ChatGPT to write to amuse and/or terrify you:
“Nonprofit Renaissance: A Sonnet on the Impact of AI”
Oh, how AI will change nonprofit work,
With algorithms and machines so smart,
Tasks that once took hours now done with a quirk,
And insights gleaned from data’s every part.
No longer will we labor through the night,
To sift through spreadsheets and forms of paper,
AI will swiftly sort with pure delight,
And help us make decisions that are greater.
The impact of each dollar now so clear,
As AI reveals where funds should be spent,
We’ll serve our causes far and near,
With efficiency and ease that’s heaven-sent.
So let us welcome AI with open arms,
And use it for the greater good’s true charms.
—
Order Vu’s book “Unicorns on Fire: A Collection of Nonprofit AF Blog Posts Finally Edited for Spelling and Grammar.” Revenues generated from sales of this book from now until end of June will be donated to relief efforts for the earthquakes in Turkey and Syria.