In Conversation With: Stephen Waddington

Caterina Sorenti | 31 May 2023

The spotlight on generative AI tools has sparked a debate about the many ways corporate communications will be affected by this technology shift. Whether it is striking a balance between using these tools and maintaining authentic communications, or ensuring the verifiable truth is not lost, these new AI tools have changed the landscape of online content production.

Stephen Waddington, founder of Wadds Inc., a professional advisory firm for agencies and communications teams, and a PhD researcher at Leeds Business School, spoke to us about the potential for using generative AI to create content, targeting your key audience groups, and how to future-proof your career in communications.

Caterina Sorenti (CS): Our clients are in charge of many of the world’s biggest companies’ corporate websites and social media channels. What are the key ways in which generative AI is likely to impact their work?

Stephen Waddington (SW): It will create content and supplement any function for which the client has created content, whether for a domain or a specific topic or subject. You can ask AI to create copy or blog posts for a website. This can be done either through a ‘query’, i.e. “write me a blog post”, “write me a piece of text” or “write me a website page”, or you can provide the AI with a block of content and ask the tool to summarise it into whatever type of content you like. Similarly, this can be done with social media posts, both text and images.
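The two prompting modes described above, a direct query versus supplying a block of source content to summarise, can be sketched as simple prompt builders. This is an illustrative sketch, not part of the interview: the function names and the chat-style role/content message format (of the kind used by APIs such as OpenAI's) are assumptions.

```python
# Minimal sketch of the two prompting modes described above:
# (1) a direct query, (2) supplying source content to summarise.
# Function names and the message format are illustrative assumptions.

def build_query_prompt(task: str) -> list[dict]:
    """Mode 1: ask the model directly, e.g. 'write me a blog post'."""
    return [{"role": "user", "content": task}]

def build_summarise_prompt(source_text: str, target_format: str) -> list[dict]:
    """Mode 2: provide a block of content and ask for it in another form."""
    instruction = (
        f"Summarise the following content as {target_format}:\n\n{source_text}"
    )
    return [{"role": "user", "content": instruction}]

# Example: turn a longer piece of source text into a social media post.
messages = build_summarise_prompt(
    "Acme plc today reported its half-year results...",
    "a 280-character social media post",
)
```

Either message list would then be sent to whichever model the team uses; the point is that the summarise mode grounds the model in supplied text, which reduces (but does not remove) the fact-checking burden discussed below.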

There are risks with these tools, especially with ChatGPT. These technologies have been available for five to ten years but, for the first time, they are completely democratised, so that anyone can access them. If you ask the AI to write a piece of content, you will need to fact-check it because of what is called a ‘hallucination’. These tools work on a predictive basis, based on word association, and will make stuff up; it is that simple. If you asked one to write a biography of me or you, it would most likely get the first paragraph right based on the content it scrapes from around the web (in my case probably from my Wikipedia page), but then it will start to make stuff up in the second, third and fourth paragraphs. So you absolutely have to fact-check your content. If you provide it with a data set, for example my 1,000-word academic biography, it will shorten that and do a reasonably good job of editing it and providing a first draft.

The algorithms for the AI tools are all based on Western European and US-based language. If you look at the GPT data set that was published, it is based on Wikipedia, Google Books and parts of the web, which are all largely Anglo-Saxon content. That sort of bias is therefore inevitable. These systems are also being created on the West Coast, largely by middle-class white Americans, so this is a real danger, and any article about ethics or rights in this space raises it as an issue. One thing you can do to counter this is to ask these tools for the perspectives of an audience that you might not otherwise be able to access. These perspectives will also help with the accessibility of content produced by AI.

CS: You’ve likened the rise of generative AI to the arrival of the web in the 1990s and the growth of social media around 2005-2010. Do you think the impact of generative AI on communications is going to be as big as these past technology shifts (or, indeed, bigger)? If so, why?

SW: It’s a moment of disruption. If you think back to the web, it changed the paradigm of how content was published. It changed the paradigm of how we as practitioners were allowed to engage audiences through our content. First of all, we did that through the web, and secondly, we did that through social media, creating profiles and accounts and building communities. Previously, we’d had to use either forms of media that weren’t scalable or media relations as a conduit, so we got the opportunity in both these instances to build our own means of community. It was truly, truly disruptive to public relations and communications, and I think the same thing will happen again, for a different reason. This time, in terms of the scale of being able to create content and tailor that content, we now have a tool to help and support us in doing that.

CS: In what practical ways can corporate communications professionals use generative AI tools to improve their work right now?

SW: First of all, contextually understanding the perspective of your audience, or understanding the context of the reader you are intending to create content for. We all bring our own biases to the situations we are writing about, but you can ask AI tools like ChatGPT to provide the context or the key issues for your intended reader or audience. It might be young parents, or people buying a house. If you ask, “what are the issues that are front of mind for ‘X’ audience group?”, the tools will immediately spit out the context for that.

Secondly, it will create content for you and can produce either the outline of a document or a decent first draft. The issues of ‘hallucination’ and making things up aside, you’ll need to edit and fact-check everything. There is also an issue with these language models in that the data set for GPT-4 was built from content published two years ago and earlier, so it hasn’t got real-time content. If you want to add contextual information, you’ll need to get that real-time information from somewhere else.

The final thing is producing different versions of that content. From a corporate communications point of view, maybe you need a blog post or a news article, or fifteen different versions of a Tweet or Facebook post. The AI tools will do a reasonably good job at providing a first draft for those, particularly if you provide it with a source document. An application I use AI for all the time is uploading a PDF or text from a larger document and asking it to create summaries of that.
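Uploading a larger document for summarising, as described above, usually means first splitting the text so each piece fits within a model's context window. A minimal word-based chunker is sketched below; the chunk size and overlap values are illustrative assumptions, not figures from the interview.

```python
def chunk_text(text: str, max_words: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping word-based chunks so each piece fits
    within a model's context window. The overlap carries some context
    across chunk boundaries so summaries stay coherent.
    Chunk size and overlap are illustrative, not tuned values."""
    words = text.split()
    if len(words) <= max_words:
        return [" ".join(words)]
    chunks = []
    step = max_words - overlap
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + max_words]))
        if start + max_words >= len(words):
            break
    return chunks

# Example: a 1,200-word document split into overlapping 500-word chunks.
chunks = chunk_text("word " * 1200, max_words=500, overlap=50)
```

Each chunk would then be summarised separately and the partial summaries combined, which is the usual workaround for the document-size limits of these tools.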

CS: It’s so important to note that some of these language models don’t keep up with real-time events, as you said. If companies had events or controversies happening in real-time, or they wanted to collect an audience’s wider consensus on contemporary events, it would prove challenging to do that. 

SW: There are several large language models. GPT-4, which ChatGPT is based on, is getting the most attention, but there is also Bard from Google. Bard does incorporate contextual information based on content from the web, so if you ask it for an example of a situation it will pull that from the web, and it will probably be contemporary.

CS: What advice would you give corporate communications professionals who are keen to future-proof their own careers amid the rise of generative AI?

SW: We go through this sort of paranoia every time a new form of technology emerges. We thought it was going to happen with the Internet, or with social media, and now it is going to happen again! Actually, I think that if you are smart and you keep ahead of the technology then it just improves what you are doing.

One thing we haven’t mentioned is that the tools are in quite a nascent state at the moment; it’s really either ChatGPT or Bard. But I think the exciting moment will be when this technology starts to be integrated into applications like Microsoft Word, or whichever word processor you use, because then it will be sitting alongside what you are doing. Microsoft has said it is going to create a tool called ‘Copilot’, where the AI tool sits alongside you and supports you. I think we will most likely see a lot of energy and interest around the development of writing skills, and in all honesty that has to be a good thing.

The contrary argument is that we could fill the web with a lot of nonsense and create a singularity where everyone is asking the same queries and creating the same articles. At Google I/O (Google’s annual developer conference), Google said it is tweaking its search algorithms to stop that being the result of tools like Microsoft’s.

CS: Is there a risk that the verifiable truth will be harder for people to find as AI-fuelled search takes off?

SW: I run an agency, and the issue came up recently of finding the truth, and of how the truth is a dynamic thing, so this is really important. I do think we will certainly see organisations incorporating fact-checking into their information governance.

CS: The communications world - and wider business world - seems to be divided between those who see the arrival of generative AI as a potential cataclysm, and those who think it’s the most excitingly positive thing since the internet. Where do you think communications professionals should sit on that spectrum?

SW: This happens with every new form of technology: you get the progressive people who are keen to adopt it and try it, and it follows the classic tech adoption curve. I think it’s incumbent on anyone who wants to future-proof their career to think about the issues and the potential to use this for good.

CS: What are some potential ethical concerns that may arise with the use of generative AI in corporate communications?

SW: We previously discussed the main two: making things up and bias in perspective. There is another key issue relating to copyright. If the AI generates content, who owns the copyright? At the moment in the UK, it’s the person who wrote it, but in the US, it is the machine that wrote it. This has also been discussed in relation to images: if a model is trained on a lot of stock photography, who owns the new iterations it produces? To sum up, it’s copyright, bias and fact-checking.

Stephen Waddington was speaking to Caterina Sorenti of Bowen Craggs. If you are thinking about using AI tools to assist in creating content for your corporate website, and would like to discuss this further, you can connect at