Algorithms and artificial intelligence are everywhere. Most of us working in communications are already aware of them and what they do, but a tweet by Kerry Sheehan, Chair of the CIPR's AI in PR Panel, made me stop and consider the wider implications of these technologies, both now and in the future.
The world at large, though, probably has only a minimal understanding of how mathematical formulae shape their lives – from the posts they are shown when they open their social media accounts to how their car insurance premiums are calculated.
All of that has changed with the complete omnishambles around the awarding of A-Level grades this month, thrusting these calculations (and their flaws) into the limelight.
Just as the Cambridge Analytica scandal made people more aware of the data that organisations were collecting and holding about them, the flawed process of awarding grades by mathematical calculation will likely leave society as a whole much more aware (and wary) of the algorithms that affect everyday life.
As Kerry’s tweet sets out, not only must the use of AI and algorithms be completely transparent, and the wider implications of the results they generate considered, but communications professionals should have a role right from the outset.
If the people devising the algorithms and building the AI tools are representative of the end users they will serve, we are much less likely to see failures on the scale of the exam results fiasco.
And where issues do occur, having a comms professional there as the voice of those stakeholders will be crucial for heading them off at the pass – it’s what we do every day.
Artificial intelligence, when used responsibly, has immense power to do good and to shape the communications industry (and society) in a positive way. The work of the CIPR’s AI in PR Panel shows how AI can help comms professionals interpret huge volumes of data effectively and improve the way that we all work.
It’s crucial that the principles underpinning this work are ethical and take into account not just our organisations and clients but also the lives of the people whose data will be used and who will be affected by the results.
As communicators, we should also act as the voice of these people when dealing with those developing the algorithms, providing challenge and critical advocacy to help our employers and clients spot issues before they become problems.
And when things don’t go quite according to plan, we should provide transparency, clarity and honesty about what went wrong, and how and where it happened.
This will foster trust rather than suspicion about a technology that will become increasingly part of our lives in the future.