The Pitfalls of AI Accuracy: How to Avoid Costly Mistakes

AI is exploding in the digital marketing world, and despite op-eds across major publications sounding like harbingers of doom about these tools, many marketers are already learning how to harness ChatGPT and its ilk to work smarter, not harder.

However, as with any new technology (LaserDiscs, the metaverse, etc.), blind acceptance can lead to considerable risk down the road. In the case of AI, broad misconceptions about how these chatbots work can result in a variety of pitfalls for the plucky digital marketer.

In an attempt to ward off disaster, here are some risks of AI to work around and through for maximum benefit.

Spreading Misinformation, Intentionally or Not

It’s important to break down how AI platforms like ChatGPT and Bard work to understand where the risk for error lies. Both are large language models (ChatGPT is built on OpenAI’s GPT series; Bard on Google’s LaMDA): essentially, incredibly sophisticated probability calculators that, rather than making any kind of value judgment or verifying facts, simply respond with the most statistically probable answer. This makes it extremely easy to assume more actual “intelligence” on the part of our AI overlords than is really there, a trap door just waiting to be stepped on.
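To make that concrete, here is a minimal toy sketch of “responding with the most probable answer.” The words and probabilities below are invented for illustration and don’t come from any real model; the point is that a frequent-but-wrong continuation can beat a correct-but-rarer one.

```python
import random

# Toy sketch of next-word prediction. The candidate words and their
# probabilities are made up for illustration; no real model is queried.
# The "model" samples the statistically likely continuation -- it never
# checks its answer against a source of truth.
prompt = "The capital of Australia is"
next_word_probs = {
    "Sydney": 0.55,    # mentioned far more often in typical text
    "Canberra": 0.35,  # the actual capital
    "Melbourne": 0.10,
}

words = list(next_word_probs)
weights = list(next_word_probs.values())
completion = random.choices(words, weights=weights)[0]
print(f"{prompt} {completion}")  # fluent and confident, possibly false
```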

For example, ChatGPT was trained on a fixed dataset that ends in late 2021. This means it not only has no knowledge of itself or its public perception, but its information is also outdated. If, say, you wanted to know who the most popular basketball player is, you’d get 2021’s answer, possibly with a disclaimer.

[Image: ChatGPT screenshot]

Additionally, while many advocate using AI to write code, the code it provides is not always correct. It is very important to fact-check any output AI gives you to ensure it meets your standards of accuracy and truthfulness.
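As a hypothetical illustration (the function and test below are invented for this example, not real AI output), here is the kind of plausible-looking helper an AI assistant might produce, along with the quick spot check that catches its bug:

```python
# Hypothetical AI-suggested helper: it reads cleanly and works for
# odd-length inputs, but it silently mishandles even-length ones.
def median(values):
    ordered = sorted(values)
    return ordered[len(ordered) // 2]  # bug: no averaging for even lengths

# A quick spot check against known answers exposes the problem:
print(median([1, 3, 5]))     # 3 -- correct
print(median([1, 3, 5, 7]))  # 5 -- should be 4.0
```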

Another type of AI misinformation ripe for exploitation is the AI-generated deepfake, whether in text or image form. Deepfakes are, loosely defined, fictional images depicting real people that are indistinguishable from actual photos. Recent examples include “photos” of Donald Trump doing a perp walk and Pope Francis wearing a puffer coat; the AI image generator Midjourney has since paused free trials while it works toward more robust content moderation.

When creating AI content, especially anything involving image generation, it’s best to be as explicit as possible that images are AI-generated, whether by centering your campaign around AI imagery (making its origin obvious) or by providing captions and alt text that state plainly that the photo in question is AI in origin.
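As a minimal sketch of that kind of disclosure in practice (the filename and wording here are placeholders, not a required format):

```python
# Sketch of explicit AI disclosure in alt text and a caption.
# The filename and phrasing are placeholders -- adapt to your CMS.
alt_text = "AI-generated image of Pope Francis in a white puffer coat"
caption = "This image was created with an AI image generator."

figure_html = (
    f"<figure>"
    f'<img src="ai-pope-puffer.jpg" alt="{alt_text}">'
    f"<figcaption>{caption}</figcaption>"
    f"</figure>"
)
print(figure_html)
```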

[Image: AI-generated photos of Pope Francis at the Met Gala]

As you can see from the images above, it’s fairly easy to create AI images that are convincing deepfakes: these took about 10 minutes to create, and each could pass for an actual photo of Pope Francis arriving at the Met Gala.

AI Can Lead to Broken Trust

ChatGPT and Bard are faceless chat platforms, and ChatGPT offers its API for use in a variety of third-party apps. This has already resulted in users being deceived.

Mental health startup Koko received ample negative attention for using AI chatbots with its users without disclosing that there was no human behind the texts. Particularly given the population involved, people seeking counseling for mental health difficulties, the lack of disclosure is possibly illegal and almost certainly unethical. While there may be a future in which AI aids the treatment of mental illness, it is of utmost importance to be transparent about where there are humans, and where there aren’t.

Presenting material written by AI, without human input, as the product of human work is a disservice both to the expertise and high-level thinking humans bring to their work and to the public, which may come to believe that certain fields are now redundant because AI *can* do it all.

This poses not only the immediate risk of losing the public’s trust but also the longer-term risk that entire fields of work will be outsourced to AI.

Broken trust carries SEO consequences as well. Google prioritizes trustworthy content, and while Google’s current guidance indicates that high-quality content can rank whether it’s human- or AI-generated, content passed off as human-created will erode the trust of journalists, and eventually Google, if AI inaccuracies slip by and your content contains factual errors.

Related: ChatGPT and Local SEO: How AI Can Help and Hinder Your Strategy

With all this in mind, it’s better to be straightforward, intentional, and watchful when it comes to AI, or else the careful relationships and rankings you’ve built could become more like a house of cards. 

[Image: AI-generated house of cards]

The Risks of Plagiarism in AI Content

It’s time to return to the fundamental mechanics of AI platforms like ChatGPT and Bard. Both are large language models, but where ChatGPT’s training data is a fixed set, Bard can draw on the live text of the internet; either way, both almost certainly include copyrighted material in their training data. This presents an ethical conundrum: what if you publish AI-assisted material and it turns out to be plagiarized wholesale?

Even early iterations of Midjourney produced the ghosts of “Getty Images” watermarks on some generations, prompting speculation as to where, exactly, Midjourney and similar AI platforms got many of their source images. While this has since been addressed, it still places AI writ large in tricky ethical territory.

Once again, the solution is to edit AI content and make it your own: tweak the text, rework particularly wooden sentences, and add your own elements to any images. This not only protects you from plagiarism risk but also makes it far more likely that content you create with the help of AI is protected by copyright law.

Over-Reliance on AI Is a Dangerous Game

There’s no doubt that AI is here to stay, and so are its risks. AI is disrupting how we work, and even how we think about work. A quick peek at LinkedIn shows hundreds jockeying for thought-leadership positions on AI and how to use it to maximize your work. There is, however, a risk in leaving it all to our new robot overlords.

Over-reliance on AI can have both short-term and long-term consequences for any digital marketer.

In the short term, it can dull your overall creativity and drive toward ingenuity. AI models, being probability machines, output whatever they calculate to be the most likely answer, so thinking outside the box is fairly impossible for them.

Digital PR campaigns might become overly formulaic, rest on incorrect data given AI’s accuracy issues, or simply turn out unoriginal, because innovation is not AI’s strong suit. While we’d love to simply plug in a few keywords and prompt ChatGPT to create an entire campaign from start to finish, that’s still impossible. And new digital PR pros might become habituated to never truly brainstorming new concepts.

Not only that, but there’s an existential threat at play as well: if companies, clients, and brands begin to think the work of digital PR can be fully automated, agencies won’t have clients, and quite a few digital PR pros may suddenly find themselves without jobs, on the assumption that AI can do their work just as well.

It’s imperative to show that the human touch is necessary for truly great work, and that AI is an assistant that lets any PR pro think more strategically and execute higher-level campaigns with their newfound free time. Otherwise, we might see wholesale loss of jobs to AI, not to mention a cascade of other effects.

AI is a powerful tool… one with powerful risks attached. That’s why it’s important to use it mindfully and not simply assume it is airtight and infallible.
