AI, ChatGPT and LLMs are a hot topic in the PR and communications industry. Find out what agencies are saying about the future of the industry.
So, I decided to take a look at what PR agencies are saying about this technology to understand how agencies view the use of these tools in communications, what problems and issues may arise from their use, and what the possible benefits are. Not trolling, just the truth.
This post deals with what agencies have written about ChatGPT and LLMs specifically. I have not looked at articles about the application of AI in video, graphics, and audio. AI technology for visual/audio communications is another universe entirely. Perhaps another post.
A note on methodology: I put together a list of 57 agencies, 18 BigPR firms and 39 Indy shops. Not an exhaustive sample, but a decent cross section. I could have asked ChatGPT to build me a list, but I created it based on firms I know personally and professionally. The Bigs are all among the top-20 global firms, and the Indy shops are all in the US and are for the most part run by people I know and respect.
First, I sent a human to look at each agency's blog and LinkedIn to see whether it had published on the topic. We then scraped the blog posts into a Google Doc for re-processing and analysis by humans and ChatGPT. More on that later.
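For anyone curious what that collection step might look like if automated, here is a minimal Python sketch using requests and BeautifulSoup. The URLs, the output file name, and the CSS selection logic are illustrative assumptions for this post, not the actual tooling we used; much of our gathering was done by hand.

```python
# Hypothetical sketch of the collection step: fetch each agency blog post
# and save the article text to a local file for later analysis.
# The URLs and output file are placeholders, not the actual sources we used.
import requests
from bs4 import BeautifulSoup

POST_URLS = [
    "https://example-agency.com/blog/chatgpt-and-pr",      # placeholder URL
    "https://another-agency.com/insights/llms-in-comms",   # placeholder URL
]

def fetch_post_text(url: str) -> str:
    """Download a blog post and return its visible article text."""
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    # Most blogs wrap the body copy in an <article> tag; fall back to the full page.
    article = soup.find("article")
    return (article or soup).get_text(separator="\n", strip=True)

if __name__ == "__main__":
    with open("agency_posts.txt", "w", encoding="utf-8") as outfile:
        for url in POST_URLS:
            outfile.write(f"SOURCE: {url}\n{fetch_post_text(url)}\n\n")
```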
The first question we asked was whether the agency had published on the topic and, if so, whether it generated original content or reposted something from another source such as a media outlet or influencer.
Our first finding was that Indy firms far outpaced BigPR in posting on the topic, by a margin of 2.5 to 1, and Indy firms were also much more likely to post original content than BigPR, by a margin of 2.8 to 1.
Next we summarized what these PR firms are saying about the potential dangers of this technology, as well as the benefits and potential use cases of ChatGPT and LLMs.
To summarize a lot of wordy-word-type words (more than 25,000), we used ChatGPT to analyze each blog post (we did read them as well). The prompt sequence was straightforward: for each article, we asked ChatGPT to summarize the concerns and benefits the author raised.
We then took the ChatGPT summaries of each article and asked ChatGPT to summarize the findings across all of the articles, along the lines of concerns and benefits, to arrive at a set of meta-conclusions raised by the authors. In all there were approximately 18-20 mutually exclusive conclusions, both positive and negative, across the 28 articles we analyzed. We discarded a few for relevance and quality.
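We did this work directly in the ChatGPT interface, but for anyone who wants to reproduce the two-stage approach programmatically, here is a minimal Python sketch using the OpenAI API. The model name, prompt wording, and file layout are assumptions for illustration only; they are not the exact prompts we used.

```python
# Hypothetical sketch of the two-stage summarization: summarize each article,
# then summarize the summaries into a set of meta-conclusions.
# Model name, prompts, and file layout are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment
MODEL = "gpt-3.5-turbo"  # placeholder model choice

def summarize(text: str, instruction: str) -> str:
    """Ask the model to summarize `text` according to `instruction`."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": instruction},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    articles = open("agency_posts.txt", encoding="utf-8").read().split("\n\n")

    # Stage 1: summarize each article's stated concerns and benefits.
    summaries = [
        summarize(article, "Summarize this article's concerns and benefits "
                           "regarding ChatGPT and LLMs in PR.")
        for article in articles if article.strip()
    ]

    # Stage 2: roll the individual summaries up into meta-conclusions.
    meta = summarize(
        "\n\n".join(summaries),
        "From these summaries, list the distinct concerns and benefits "
        "raised across all of the articles.",
    )
    print(meta)
```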
Overall, the industry is fairly balanced and clear-eyed about how ChatGPT will impact the PR/Comms industry, with the prevailing view being the need for ongoing awareness, curiosity and caution about the risks and benefits associated with using AI in a creative context.
The issues of concern raised by agencies can be grouped into three distinct categories: Bad Robot, People Doing Dumb Sh*t with the Robot, and Threats to the Creative Class.
The Bad Robot argument suggests that AI, ChatGPT and LLMs possess inherent flaws, problems and negative externalities in and of themselves, regardless of who is using them or how they are being used. There is a fatalistic quality to these concerns, but that doesn’t make them wrong, nor should they be dismissed on the grounds they originate from a narrow set of beliefs. The main Bad Robot concerns include:
Environmental Impact: This concern gets mentioned a lot. AI-powered tech in general, and LLMs in particular, are compute-intensive and require a lot of energy to train and run, so the carbon footprint of training and operating an LLM can be significant.
Bias: LLMs can perpetuate and amplify biases that exist in their training data, reinforcing existing patterns of discrimination and exclusion in society, which can have harmful consequences.
Copyright: Many posts warn of potential risks posed by AI from its use of copyrighted material without attribution. Note: I did ask ChatGPT about the risks of copyright infringement arising from use of the service. See sidebar below.
Privacy: LLMs can be used to extract personal information from text, and they rely on vast amounts of personal data to operate effectively, posing challenges to privacy and data protection.
Concentration of power: A small number of tech companies currently dominate the development and use of LLMs, which can concentrate power and influence in their hands.
Security: LLMs can be vulnerable to hacking and other security threats, which could have serious consequences if they are used in critical systems such as healthcare or transportation.
Transparency: The complex algorithms used by LLMs can be difficult to understand, which can make it hard to hold them accountable for their decisions.
Dependence on large amounts of data: LLMs require vast amounts of data to train effectively, which can create a barrier to entry for smaller organizations or those without access to large datasets.
The second group of concerns, People Doing Dumb Sh*t with the Robot, centers on the potential for LLMs to be used for malicious purposes, such as generating fake news or impersonating individuals online, and generally making the internet and society a worse place than it is already.
Reputation Impact: This group of concerns comprises the risks of reputational harm to individuals, corporations and brands from inaccurate or biased responses generated by LLMs, particularly in public-facing functions, when not checked or vetted by a human being. These include the need for quality control and human feedback to ensure the accuracy and effectiveness of LLM-generated content.
Misinformation: LLMs can be used to generate fake news and other types of misinformation, which can have serious social and political consequences.
Time-wasting: Not a concern explicitly expressed by any of the authors, but one I've been thinking about myself: tumbling down a recursive and reductive prompt-writing rabbit hole, only to emerge hours later with little or nothing to show for the time spent. There are already many constraints and demands on our scarce cognitive capacity, and ChatGPT may at times fail the value/time metric.
The third group of concerns, Threats to the Creative Class, focuses on the threat AI poses to members of the laptop or creative class. The argument centers on the potential of AI to set a new floor for content production, leading to unrealistic expectations about how long work takes and what it costs, concerns about quality, and the potential for AI to replace humans in the creative process.
Poor Quality Content: The fact that AI can and does generate poor-quality, superficial and irrelevant content should not be a surprise to anyone. AI can be a drain on creativity rather than an enhancement, and can mean the loss of certain aspects of the creative process. One consistent issue focused on using AI to write bylines and thought leadership for clients. Here's how ChatGPT summarized several articles in this category:
The concerns raised by the authors of these articles are that using AI to generate executive bylines and thought leadership content poses several risks. The authors argue that using AI for thought leadership risks producing unoriginal, clichéd content that lacks the personal experience and insights that are inherently human. Therefore, the authors suggest that relying solely on AI for thought leadership is a recipe for failure and that human input is essential for creating valuable, original content.
Human Replacement: The potential for LLMs to displace and replace human workers in certain industries and occupations, leading to job loss and economic disruption. That said, there are some agency tasks that can and should be offloaded to AI because the technology is ideally suited to them, such as summarizing a large amount of writing, research and data from multiple sources.
Of the concerns about creativity, the content-quality and human-replacement issues were consistent across the articles we analyzed. Building scroll-stopping content is a complex and uniquely human task, and one agencies are ultimately qualified for and capable of. If there is a line in the sand about AI + PR, this is probably it. Whether AI is the knowledge economy equivalent of the spinning jenny remains to be seen.
Now to the benefits of AI, ChatGPT and LLMs as seen by authors at various agencies.
The benefits under this rubric include the potential for AI to help PR/comms professionals with a wide range of highly practical tasks, many of which are time consuming and tedious: language translation, initial research, fact finding, question answering, and summarizing large amounts of information, as well as aiding the creative process by giving humans more time to think and experiment.
Speed and Agility: ChatGPT is almost universally credited with an ability to help communicators respond more quickly to ideation, cultural trends or crises. AI tools are also noted for the ability to personalize messaging and content to individuals at scale and to return complex summaries of vast amounts of information.
Translation: ChatGPT and LLMs are seen as able to help improve language translation, automate content creation, and enhance chatbot interactions.
Accessibility: LLMs can help make information more accessible to people with disabilities, such as those with visual or auditory impairments.
Efficiency: LLMs can automate tasks that were previously time-consuming and required human labor, such as content moderation and customer service.
Blue Sky Ideation: Some view LLMs as having the potential to enable new scientific discoveries by processing and analyzing vast amounts of data in various fields, such as medicine and climate science.
Enhancing Creativity: Far from destroying the creative process, many in PR/Comms see AI as a useful tool for generating thought starters that can jumpstart the creative process by quickly prototyping new and different ideas. AI can open up avenues of creativity and art that were previously limited by human imagination, because these tools can generate novel content in a variety of formats, mixing and re-mixing poetry, stories, and music to create something totally new. Example: In the voice of Dr. Seuss' Cat in the Hat, write the opening remarks Elon Musk should make to employees at the next Twitter town hall.
In conclusion, most agencies see a need to address the ethical concerns, biases, and privacy issues associated with the use of AI, to ensure the potential benefits are realized without creating negative consequences. Overall, authors point to the potential of AI as a creative tool that can enhance human creativity and provide new opportunities for individuals to optimize their workflows, for teams to collaborate, and for brands to engage with consumers.
Alright, that is a wrap. I hope you enjoyed this post. It was an interesting and fun project.
Sidebar: ChatGPT's response when asked about the risks of copyright infringement arising from use of the service.
As an AI language model, ChatGPT operates by analyzing and processing vast amounts of text data, including copyrighted material. However, using copyrighted material without proper attribution can potentially lead to legal and ethical risks, including:
Copyright infringement: The use of copyrighted material without permission or attribution could be considered a violation of copyright law. Depending on the severity of the infringement, ChatGPT's creators and users could face legal action, such as a lawsuit for damages.
Plagiarism: If ChatGPT generates content using copyrighted material without proper attribution, it could be considered plagiarism, which is an ethical violation. Plagiarism undermines the credibility of the content and can result in negative consequences for the creators and users of the AI language model.
Reputational damage: The use of copyrighted material without proper attribution could damage the reputation of ChatGPT's creators and users. This could lead to a loss of trust from the public and other stakeholders.
Financial penalties: In some cases, using copyrighted material without proper attribution can result in significant financial penalties, such as fines or legal fees. These penalties could be costly for ChatGPT's creators and users.
To mitigate these risks, it is important for ChatGPT's creators and users to ensure that they have the proper permissions and licenses to use copyrighted material, and to attribute the sources of the material appropriately.