Mike Zywina, Director at Lime Green, shares some reflections on what we've discussed and learned so far about using AI...

I know what you're probably thinking… the last thing you want to read is something else about AI. But stay with me a minute. I think this is going to be helpful.

At Lime Green, we've been slow off the mark to think seriously about Artificial Intelligence (AI) – partly due to limited capacity, but also to give us time to listen to people's very varied views and see how it's being used by others. Because we know there's great potential, but also real concerns.

Recently we decided it was time to dip our toe in the water. Not by immediately using it in our consultancy work, but by thoughtfully exploring what a good AI strategy might look like. What we've learned has been helpful for us, and is highly relevant for charities and social enterprises too.

We've been lucky to work with the fab Simon Allen, outgoing CEO of Age UK Bath & North East Somerset and co-founder of Level.ai, which aims to help charities, social enterprises and ethical businesses use AI for good. Simon took our team through a series of reflection and planning workshops (it was nice to be on the other end of workshop facilitation for once!).

Our starting question wasn't: how can we use AI to save time? Or rather, it was much broader than that. More like: what should AI mean for an organisation looking for creative solutions to help a sector in crisis, but also wanting to maintain an ethical, values-led approach, all while being instinctively suspicious of any "next big thing"? In other words, how can we balance the potential benefits of AI against the known risks in terms of accuracy, authenticity, originality, data security and the environment?

This starting point has led to some fascinating conversations. I'm still in learning mode and very much not an expert, but I wanted to share what we've learned so far.

There's little sense in automating specialist tasks first

Yes, there's potential for AI to help us with tasks like writing funding applications and strategy reports. But this isn't easy to do without compromising on quality and humanity. Plus, writing is something our team feel very passionate about and hugely enjoy. They want to be writers, not AI prompters.

On the other hand, we do plenty of tasks that are less fun and require less specialist skill. Deciphering and writing up handwritten notes. Sifting through hundreds of post-its to group ideas together and remove duplicate points. Copying and pasting template application text into the right boxes on a form. Collating feedback after training courses.

These tasks still need to be done with care, and we wouldn't automate them until we were sure we could do so reliably and accurately. But that is likely to be much easier than automating highly skilled tasks where creativity and humanity are essential. So what would be the sense in starting with the problematic, complex bit?

If you're just starting out with AI, it's so easy to be drawn to the eye-catching things you see other people doing. Yes, there's still novelty and wonder in seemingly creating content out of nothing. But that doesn't mean you should do it too. Start with automating the faffy tasks that nobody would cry about never doing again.

The way forward is custom-built, paid-for tools and careful prompting

Any of us can open ChatGPT today, feed it some text about our organisation, and ask it to draft an answer to a funding question. But what happens to the information we share?
How is the answer crafted? Does it sound like us?

There are many legitimate concerns about AI in terms of data security, accuracy and originality. But these often stem from people using the wrong tools and the wrong prompts for the job.

Free tools seem attractive, but the real cost may be that the information you provide is used to train the model, compromising your data. To quote a common tech-world saying: if you're not paying for the product, you are the product. Better to find UK-based, GDPR-compliant platforms (read the privacy policy carefully) where you can pay a moderate subscription fee and untick the boxes that give permission for your data to be used to train the model.

Also, make sure you're using the right tool and the right prompts for the job. If you ask Microsoft Copilot to simply draft, say, an expression of interest for Esmée Fairbairn, the answer will sound generic, because the text it produces is essentially an average of millions of other people's writing. And it can't tailor the language to funding guidelines it doesn't know about. But creating a tool specifically for funding applications, inputting lots of examples of your writing to 'train' it to sound like you, then giving a detailed prompt that reflects what you know about a funder and what they say they're looking for? That's far more likely to produce a decent first draft you can work with.
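To make that a bit more concrete, here's a minimal sketch of what "careful prompting" inside a custom-built tool could look like. It's purely illustrative and assumes the OpenAI Python SDK: the provider, model name, file names and prompt wording are placeholders rather than a description of any tool we're actually building, and in practice you'd pick a platform whose data-protection terms (including opting out of model training) you've checked first.

```python
# A toy example of assembling a detailed, funder-specific prompt.
# Everything here is illustrative: the writing samples, funder notes
# and model name are placeholders, not a real tool.
from pathlib import Path

from openai import OpenAI  # assumes the OpenAI Python SDK is installed

# Examples of your own writing, so the draft sounds like you rather than
# an average of everyone else's writing.
writing_samples = Path("our_previous_applications.txt").read_text()

# What you know about the funder: their guidelines, priorities and language.
funder_notes = Path("funder_guidelines_notes.txt").read_text()

# Your case for support, used as the factual source material.
case_for_support = Path("case_for_support.txt").read_text()

system_prompt = (
    "You are helping a small UK charity draft a funding application. "
    "Match the tone and style of the writing samples provided. "
    "Only use facts from the case for support; do not invent figures, "
    "outcomes or quotes. Flag anything you are unsure about."
)

user_prompt = (
    f"Writing samples to imitate:\n{writing_samples}\n\n"
    f"What we know about this funder and what they are looking for:\n{funder_notes}\n\n"
    f"Our case for support:\n{case_for_support}\n\n"
    "Draft a first-pass answer (max 500 words) to the question: "
    "'What difference will this project make, and how will you know?'"
)

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ],
)

# The output is only ever a rough first draft: a human still checks,
# tailors and edits everything before it goes anywhere near a funder.
print(response.choices[0].message.content)
```

The point isn't the specific code – many organisations would do this through a no-code platform rather than a script – but that the prompt carries your own writing, your knowledge of the funder and clear guardrails, which is exactly what a generic chatbot request lacks.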
Taking a phased approach will maximise learning, maintain quality and minimise overwhelm

We initially identified dozens of tasks we could theoretically automate. But if we tried to do all of that, we'd burn through time and money, and probably tear our hair out. So we're planning a phased approach which starts by automating simpler, lower-risk tasks (see above). We'll see how this works, what goes wrong and whether we actually save time. And we'll make sure a human is always involved to meticulously check what is produced, knowing what problems to look out for.

For example, we're exploring whether we can create a tool that takes the content in a case for support and uses it to populate an extremely rough draft of a common funding application, such as for Reaching Communities or Garfield Weston. We know from experience which sections of a case for support we typically pop into which application questions, and what type of information is particularly relevant. AI can potentially speed up this initial stage; then a human can take over and do the highly skilled tailoring and editing.

If this works well, we might eventually look at using AI more extensively in bid writing. But frankly, if AI can't do a good job of creating a rough starting template, we'll never trust it with more complex writing tasks.

Moving slowly will allow us to maintain quality, use what we learn to create better tools, and avoid overwhelming the team with too many new platforms and processes at once.

AI has uncomfortable ethical and environmental implications, but that's nothing new for the charity sector

As a team, we've had some challenging conversations about the environmental and ethical implications of using AI. Firstly, this is a good thing. It's so important to create a space for these discussions. When people are honest yet open-minded about their concerns, it only strengthens your approach.

We know that AI has a significant environmental impact, and that big tech corporations have built tools that harvest and profit from people's creative content in ways that are ethically very questionable.

We also know that the charity sector is in crisis and would hugely benefit from any new approaches that save time and money. And we know that some AI tools – and some ways of using them – are better than others. Finally, AI isn't going away, and its environmental impact should decrease as the technology improves and as more people understand how to use it responsibly.

So we can choose not to participate (honestly, we've considered it). Or we can explore it while being open about the negative impacts, learning and sharing everything we can, and advocating for a better approach. And we can moderate our own usage, for example by not using AI for things a conventional search engine can help with just as well, or for frivolous trend-chasing (AI action figures, I'm looking at you).

When it comes to ethics, I've realised there are some parallels between AI and fundraising. Just as the AI industry is built on stolen data and environmental harm, so too are the fortunes of many funders and philanthropists built on stolen wealth and environmentally catastrophic business practices. Charities need their money, but they don't have to like or remain silent about where it comes from – and they can have red lines, aligned with their values, about what is and isn't acceptable. Similarly, we need the efficiency AI can bring, but we can be mindful and vocal about the ethical and environmental cost, and discerning about which tools we use.

Saving time isn't your why – be purposeful about your end goal

Yes, AI can potentially save huge amounts of time, but that's not guaranteed to be A Good Thing. After all, capitalism doesn't have a great track record of translating technological progress into social gains. The loom, automated factory assembly lines and the Internet have all been hailed as gamechangers that would transform people's productivity and quality of life. But while the rich have got richer, everyday people have to work harder than ever. Today, for all our technology, our mental health is in crisis.

So a good AI strategy should have an end goal that is about more than saving time. For us, we hope that finding significant AI efficiency savings will ultimately help us to achieve savings for charity clients and pay our team members better, as well as making a modest improvement to profit margins. In the current landscape, it is incredibly difficult as a small business to balance all of these things, and perhaps AI can be part of the solution.

For charities and social enterprises, using AI shouldn't just be about saving time and money, but about relieving pressure and improving working conditions for staff where possible.

So, what happens next?

Now we have a vision of how, when and why to use AI, we'll begin the process of creating our own custom-built tools. But we won't start using these until we have created and published a transparent AI policy.
This will be underpinned by a few key principles:
These principles will help to ensure our use of AI is a net positive for our work. Whether you're currently an AI enthusiast or an AI sceptic, I hope they'll prove useful for you too.