What funders are saying about AI and what it means for fundraisers, based on 20 statements

11/12/2025

Mike Zywina, Director at Lime Green, explores what funders are currently saying about grant applications written using AI.

If there's one topic we've been asked about most during our fundraising training courses this year, it's – perhaps unsurprisingly – AI. Should we use AI for funding applications? What are the benefits and risks? What will funders think if we use it? Are funders using it themselves?

We've previously shared our general thoughts about using AI, but what about the funder perspective? With around 10,000 trusts and foundations in the UK, and AI still a rapidly evolving area, it's tricky to form a clear picture of what's happening now – and how it might change in future. Not a rabbit hole that most people feel like descending right now. Luckily, I'm a glutton for punishment and love a bit of research, so I've spent time scouring the internet for funder AI policies so you don't have to. Here's what I've found out.

First, the methodology

Armed with strong coffee, I spent a few hours searching for funders with published guidance on the use of AI. I specifically looked at many large, national household names, did some Google searches and, yes, used Gemini AI (semi-successfully). Where a funder had published guidance, I assessed their overall stance on using AI (from strongly positive to strongly negative), summarised their overall advice, listed the pros and cons they mentioned, and noted whether they use AI themselves.

By the time I reached 20 funders, I had a headache. Caffeine or information overload, who knows? I also felt I had a decent sample size, including arm's-length bodies like Arts Council England and the National Lottery Community Fund, as well as national and local funders. It was getting harder to find new policies, so I stopped. You can take a look at the raw data here.
We'd be open to maintaining this as an ongoing resource for the sector, if that's helpful.

A note on terminology: when I talk about AI in this blog, I mean generative AI – ChatGPT, Gemini, Copilot etc.

Some headline news

Most funders were either neutral or generally negative about the use of AI. While some funders touched briefly on the potential benefits, most went into more detail about the risks. More on that below.

The most common view overall was "use with caution". Funders won't penalise you specifically for the act of using AI but, in their experience, there is a decent chance it will make your application weaker.

Only one funder seemed to have an overall positive stance on using AI – perhaps unsurprisingly, a tech funder. And only one funder had a strongly negative stance – and they focused on the impact of using AI in policymaking, rather than in funding applications.

60% of funders declared openly how they are using AI themselves, with not a single one currently using it for assessing applications and decision-making. Good news for anyone worried about a world which mostly involves computers talking to other computers. However, some funders are using AI to support things like their processes, grant administration and meeting notes. Several funders acknowledge that this is a fast-moving area and clearly plan to keep their decision under review, suggesting they might start using AI more as the technology improves.

That's the headline news. I've gone deeper into a few trends below.

Funders are keen to monitor who is using AI

Many funders now ask applicants to declare whether they have used AI. Given that they insist this won't be a factor in their decision-making, I assume this is primarily to monitor how widely AI is being used, and its impact on quality. Presumably, funders will soon have sufficient data to break down the success rate of applications written with AI, without AI, or partially helped by AI.
I'd be fascinated to see this data published in future. It's a reasonable approach but, as always, the challenge is whether the question will yield honest answers, given the power imbalance in the relationship. A bit like the questions some funders ask about how long their application took and any suggested improvements to their process, you have to wonder whether fundraisers will trust that giving an honest answer won't disadvantage them.

The number one issue with AI applications: they seem generic and inaccurate

The most common concern funders had was the quality of the output. This theme came up repeatedly in their comments.
Of course, the question is whether funders truly know if an application is assisted by AI. It's hard to tell whether this is a fair judgement on the overall quality of AI applications, or just the obviously bad examples.

But there's an important point here. Large Language Models (LLMs) are trained on a vast number of examples of written work. They are, quite literally, an average of everyone's writing – so what they produce will sound average. It would be a miracle if the language didn't sound generic and monotonous, with the same phrases cropping up repeatedly. Not great if you're a funder reading through 50 applications that day. And that's before we get into AI's tendency to "hallucinate" and make up statistics, sources etc.

Several funders were keen to emphasise that they don't assess applications based on things like spelling and grammar. The "soul" of an application is evidently more important than its superficial appearance. If we take funders at face value on this (and I do wonder if they overestimate their ability to strip out unconscious bias when assessing applications), then this is encouraging for those fundraisers who aren't natural or prolific writers. It also explains why AI – with its focus on format over content – isn't proving hugely effective for application writing.

While some funders think AI potentially levels the playing field, the logic here seems questionable

Where funders spoke positively about AI, they often cited how it can benefit grassroots community groups, people with little experience of writing applications, and those facing language barriers. Particularly for these groups, some funders recognised the potential of AI to save time, improve accessibility and support creativity. The implication is that it might help level the playing field for people who find writing applications hardest, even if it doesn't enhance the work of seasoned fundraisers. Reading their full statements, I can't help but feel sceptical about this.
It feels like a contradiction to simultaneously say that AI produces generic, inaccurate writing, but also that it helps people who lack the time, skills and circumstances to write a good application. Being able to harness AI to write applications seems to depend heavily on your ability to access more sophisticated tools, write clear and intentional prompts, and spot what mediocre AI writing looks like. Logically, that will be harder for organisations and people who face barriers, not easier. In which case, AI seems more likely to widen the gap between good and bad applications, not reduce it.

The legal, ethical and environmental issues with AI are a common, but not universal, concern

While funders more often focus on quality, many also refer to wider implications such as the security of data (especially when using free tools), the lack of AI regulation, in-built biases that perpetuate discrimination, the erosion of copyright and intellectual property, and the huge environmental impact of using AI.

Unsurprisingly, sector-specific funders tend to zone in on the issues most salient to them. For example, Arts Council England voice concerns about the moral and legal rights of creators, while Joseph Rowntree Foundation cite the risk of "scaling injustice". In general, funders in the arts, science and tech sectors – for whom concerns about the use of AI are likely to be directly relevant to their work – tend to have more detailed policy stances that look beyond the quality of output.

So what does all this mean for fundraisers?

Firstly, read what funders are saying about AI, and heed their advice.
Many funders have impressively detailed and balanced statements on using AI. This is clearly a live issue, prompting plenty of reflection. Their advice on the risks of AI, and how to use it carefully and responsibly, is generally sound. Their insights into how the use of AI impacts their assessment process – whether or not we personally agree – are significant.

Think of AI as a tool that can potentially support you with certain aspects of the trusts fundraising process, including as a sounding board, particularly in areas where you might personally be weaker. But only if you use careful, specific prompts, check the results carefully, and take steps to safeguard your data.

Don't see AI as a primary solution to the current funding crisis. It's unlikely to save you time, or improve the quality of your work, as much as you hope. This is really important to keep in mind when we're all under pressure to save time, and AI companies are busy promising us the moon on a stick.

And if you don't trust AI to draft your application, consider whether you should trust it to edit it. In asking AI to critique your draft or cut it down to the word limit, be mindful that there's still scope for it to strip out originality and the language you've deliberately used to mirror a funder's guidance notes.

Finally, as fundraisers navigate the use of AI in their own work, it's also something our organisations will increasingly need to take a stance on. Funders are alive to the wider legal, ethical and environmental issues. Even if AI improves the quality of your work, you may well get asked whether it's in line with your mission and values, particularly if you work in a sector where these concerns are most relevant.

Read more about our own views on the use of AI here.
7 Comments
11/12/2025 09:58:13 am
Thanks Mike for doing this research – a great blog post.
11/12/2025 07:36:17 pm
Thanks Katherine. Glad you found it helpful!
12/12/2025 12:24:01 pm
Thanks Mike, this is really interesting.
12/12/2025 03:37:48 pm
Thanks Anthony. A few people have said that, so this is definitely going on my 2025 to do list! I'll be fascinated to see how these funder statements evolve over time.
17/12/2025 07:38:05 am
This was an interesting, and useful read, Mike, thank you.
26/12/2025 07:36:05 am
Thanks Mike. I have used AI as a way to help organise scattered thoughts. I have a high level of anxiety and brain fog when approaching bids, so it seems helpful for achieving a coherent narrative from an overstretched (potentially ADHD / overtired) brain. However, further editing is always needed, and this blog was a salient reminder that there really isn't a substitute for making time to write comprehensive bids.