Lately it feels as if artificial intelligence, or AI, is cropping up in every discussion, news article and professional forum. It’s everywhere; we can’t avoid it. But what is it? And what is it not?
At its simplest, AI is a system or set of tools designed to simulate human characteristics. The type of AI we are most commonly referring to and using now is ‘generative’ AI, so called because it generates something new by combining or transforming existing material, drawing data together. Its output isn’t necessarily the answer; it’s an answer.
This is important because, although thinking about AI in this way can help us set aside sci-fi images of machine consciousness and enslaved humans, we still need to consider the ethics of its use. Urgently.
Thinking about values and ethics
In the charity and wider third sector we seek to be a force for positive change. But we don’t need to be in the vanguard of social progress to understand that AI will change our lives forever. Much like the wheel.
At Charity Finance Group (CFG), we’ve been looking at AI from different perspectives, as a spectrum of issues, and we’re starting to develop a framework for thinking and policy. This is crucial because AI is not only advancing fast, it’s also being experimented with.
The chances are that your staff and volunteers are using it in some way – perhaps unknowingly! – and so too are your beneficiaries, clients and customers. And we also know that regulation lags woefully far behind the pace of technological innovation and adoption.
It’s vital that we create a framework that has our values as an organisation – and a sector – at the heart of it. To share something said at a CharityComms seminar we recently attended, technology is often harnessed by those with poor understanding of the issues, or unethical intentions.
Therefore, using AI could lock in a bad set of values for the long term, such as bias against people from different backgrounds or cultures, as discussed below. As one speaker put it: “The charity sector can’t be caught snoozing.”
As AI starts to have an impact on job roles and profiles, it’s even more crucial that we understand what it can and can’t do for us – and what the risks are. For example, AI could be used to support the recruitment process by sourcing, sifting and screening talent. It can supplement bid writing, taking the pain out of drafting lengthy documentation. It can generate campaign materials and messaging, and test how they land with different audiences.
The possibilities are endless – and not yet fully understood.
Human resources: will AI replace us?
This is the big question, and we’ve already seen headlines about organisations ushering in a new era of AI. BT is cutting around 55,000 jobs[1] and expects AI to replace around 10,000 of them.
IKEA is taking a different approach (judging by the headlines, which count for a lot when considering brand and reputation!). The company is retraining call centre workers as virtual designers, rather than switching to AI and making redundancies. It’s a business gamble perhaps, but this speaks volumes about the company’s intentions and values.
One of the most powerful takeaways for us in recent weeks is that AI can (should?) be used to replace tasks, not roles. Charity leaders may soon have to make some difficult choices about how they deploy AI and how that impacts jobs. So, align your values and choices and communicate them well.
A question of bias
Being truly inclusive is important for us at CFG. We’re not only mindful of potential bias when we recruit, but also when we communicate our stories and those of our communities. We’re not unique in this and many charitable organisations proactively challenge and reduce bias.
This is why the sector must approach AI with caution: generative AI can only give us what it has already been given. Or, put another way: bias in, bias out. Research shows time and again that there’s still a long way to go in creating a truly unbiased AI system[2].
Be mindful that the AI tools you use could be presenting a narrow world view that bakes in bias or simply distorts reality. Whether we’re recruiting staff or creating content with AI, we must check and test for bias. And yes, there is a role here for our sector to continue pressing for better and doing better.
Sources and transparency
At CFG we’re not currently using generative AI, but we’re keeping an open mind. We’ve dabbled with free tools such as ChatGPT and have been impressed with the content they generate. But the output undoubtedly needs an expert eye.
We like the look of Claude, not only because it can process much more contextual information than other tools, but because it is underpinned by a constitution, which draws in part on the United Nations Universal Declaration of Human Rights.
We must continue to be transparent and rigorous in the creation and sharing of information and data. That means knowing and citing our sources, including checking any found through the use of AI. If we can’t do that, we open ourselves up to a range of problems and the burden of reputational damage may fall on individuals, not only the board or organisation.
This is where professional networks – such as CFG’s membership community – will remain a vital source of knowledge and expertise, and be even more valued in the fight against disinformation.
A brighter future
Ultimately, AI will revolutionise how we understand, share and manage data and information. It will change how we create content, and it will feed into our everyday communications and workflows.
Harnessing it in the right way can help us do so much more, for so many more. This is truly exciting. But it will only be for the good if we recognise inherent risks and limitations, and build a framework to manage them. We not only owe this to our staff and volunteers, but also to our beneficiaries and those communities we serve.
And by being more intentional in our use of AI, we can pass mundane and repetitive tasks over to the machine and free up time to be truly creative and transformative. AI can help us to be the change we want to see in the world.
Opinions expressed in this publication are not necessarily the views held throughout RBC Brewin Dolphin Ltd.