AI, Ethics, and the Human Touch in Digital PR and SEO

Jurassic Park came out 30 years ago, and besides inspiring a generation’s love of all things dinosaur, it posed a powerful question about technological leaps and scientific innovation pursued without regard for their ethical dimensions. As dinosaurs ravaged the would-be tourist destination of Jurassic Park, Ian Malcolm mused, “Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.”

While Artificial Intelligence (AI) doesn’t pose an immediate bodily threat to humans the way genetically cloned dinosaurs do, it raises a considerable number of ethical issues that could impact the lives of American workers, creators, and everyday users. In this nascent era of automation, as we hand more and more workstreams over to AI, it is important to consider its ethical dimensions and ensure that AI is integrated into society in a way that benefits everyone.

The Ethics of Data Sources in AI

[Image: A robot sitting at a computer]

Two of the most popular AI platforms are ChatGPT and DALL-E, both creations of the artificial intelligence company OpenAI.

Both were trained by feeding millions of text files and captioned images into their models, creating highly sophisticated probability machines. There have been whispers of controversy around this process, particularly on the image-generation side: some of the generated images evidently reflect racial bias in the training data, and that data has also infringed on copyrighted art.

There have also been cases of unauthorized documents and imagery in the source data used to train these platforms, not to mention open questions about what happens to the chat transcripts these platforms keep of users’ prompts.

A major lawsuit against OpenAI, GitHub, and Microsoft alleged wholesale plagiarism of open source code in the training data for GitHub’s Copilot AI system. The internet gives AI an enormous well of data, but without careful monitoring, issues like this will continue to arise.

The Ethics of Misinformation

[Image: An AI-generated image of a robot]

As AI has swept the news, people have generally lauded it as a technological revolution, but occasionally its deceptive capabilities come to the fore.

It is well documented that AI platforms like ChatGPT and Bard are prone to factual errors, which they deliver with the same confidence one would expect of a correct answer. When asked who won the 2023 Chicago mayoral election more than a week before the actual runoff, both engines coolly replied that Paul Vallas had won and was the next mayor. This was untrue (Brandon Johnson went on to win the runoff). Without critical inquiry from the user, accidental AI misinformation like this can run rampant.

On a more global scale, recent news stories of Midjourney-generated AI pictures of the Pope in a Balenciaga puffer coat hoodwinking the internet showcase the potential pitfalls of such powerful AI. While Midjourney has content guidelines, other platforms might not, and the potential for falsifying pictures of people in all manner of compromising situations is clear.

Not only that, but artists are up in arms about AI’s encroachment on their craft: a photographer recently won an international contest, then refused the prize when he revealed his submission was an AI-generated image.

The ethics of AI art is a tricky subject when you consider both the creativity required to generate a compelling image and the potential plagiarism or displacement of practicing artists. Imitation art is now very possible: how do we safeguard artists and other creatives who have put time and effort into their craft to create genuinely compelling work?

Some are asking, “Is AI art ethical?” But the better question is how AI can be used ethically, in a way that honors both its creative potential and the craft of traditional artists.

Another tricky ethical dilemma is AI passing as a human interlocutor: the teletherapy provider Koko recently announced it had used AI to counsel users, without informing them, to see how the AI would perform. As expected, this caused a scandal: people genuinely seeking mental health counseling, possibly in times of crisis, might not consent to being experimented upon.

Users rated the responses highly until they learned the responses were AI-generated, and many observers pointed out that the experiment likely fell under Institutional Review Board (IRB) guidelines, which could mean serious consequences for the platform.

Journalists have also showcased chatbots behaving badly: Bing’s chatbot counseled a reporter to leave his spouse, ChatGPT fabricated academic papers, and insiders reportedly called Google’s Bard a “pathological liar” and urged the company to delay its release. CNET has been called out for publishing AI-generated articles without disclosure, and errors were later found in those articles. Clearly disclosing when AI is and is not used is crucial to maintaining ethical clarity around this new technology; tools like GPTZero can help detect AI-generated text when there is suspicion.

[Related: The Digital Marketing World Reacts to ChatGPT]

The Ethics of Value Judgment (or lack thereof) in AI

Noam Chomsky recently co-authored an op-ed in The New York Times drawing attention to the fundamental mechanism behind ChatGPT and its peers: they are simply incredibly sophisticated prediction machines.

As such, AI is not currently capable of explanation or any broader line of contextual reasoning. Prompt a chatbot for a value judgment and it will refuse to take a stance: everything from personal preferences to questions of morality is treated the same way, as something “debated by many ethicists.”
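To make the “prediction machine” point concrete, here is a deliberately toy Python sketch of next-word prediction. The vocabulary and probabilities are invented purely for illustration; real models operate over tokens and billions of parameters, but the core move is the same: pick a statistically likely continuation, with no check on whether the resulting sentence is true.

```python
import random

# A toy next-word model: for each short context, a distribution over
# continuations learned purely from word frequencies in training text.
# These probabilities are invented for illustration only.
TOY_MODEL = {
    ("the", "next", "mayor", "is"): {"vallas": 0.6, "johnson": 0.4},
}

def predict_next(context: list[str]) -> str:
    """Sample the next word from the learned distribution.

    Note what is absent: no fact-checking, no model of the world,
    no notion of truth. Only "which word tends to follow which."
    """
    dist = TOY_MODEL[tuple(word.lower() for word in context)]
    words, weights = zip(*dist.items())
    return random.choices(words, weights=weights)[0]

# Either answer comes back sounding equally "confident"; correctness
# never enters into the computation.
print(predict_next(["The", "next", "mayor", "is"]))
```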

This poses an important question: does it make sense for humans to depend on AI in questions of morality? Is this commitment to impartiality itself a type of morality? Additionally, AI platforms only disclose that they cannot make value judgments when prompted directly; it is easy to imagine a future in which average users, unaware of the basic mechanics of AI, assume an intelligence closer to the Artificial General Intelligence the field aims for, in which machines effectively have minds of their own.

If AI is already trending in an amoral direction, that doesn’t bode well for future iterations of general intelligence: these are the early stages of the plot of Terminator. This is part of the impetus for the recent open letter, signed by a variety of figures in tech, advocating a pause on developing AI systems more powerful than GPT-4. A different science fiction canon might be worth looking toward instead: Isaac Asimov’s robot universe, where three laws govern all robot behavior:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.

With these parameters in place, there might be room for an AI that can make basic value judgments around avoiding harm to human beings. Of course, in our messy and complicated ethical universe, defining a concept like “harm” is more nuanced than is ideal for a computer program inclined toward Boolean thinking.
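As a thought experiment only (not a real safety system), Asimov’s hierarchy maps neatly onto ordered Boolean checks. Everything in this Python sketch is hypothetical, and the `causes_harm` predicate is deliberately left unimplemented: writing it is precisely the nuanced judgment that no program can yet encode.

```python
def causes_harm(action: str) -> bool:
    """Hypothetical predicate. Deciding what counts as "harm" is
    exactly the messy ethical judgment current AI cannot make."""
    raise NotImplementedError("Defining 'harm' is the unsolved part.")


def permitted(action: str, ordered_by_human: bool, self_preserving: bool) -> bool:
    # First Law: never allow harm to a human. This check comes first
    # and overrides everything below it.
    if causes_harm(action):
        return False
    # Second Law: obey human orders, but only because the First Law
    # check above has already passed.
    if ordered_by_human:
        return True
    # Third Law: otherwise the robot may act to preserve itself,
    # subordinate to the two laws above.
    return self_preserving
```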

Ethical Consideration of Jobs and AI

[Image: A robot painting a picture of a robot]

With any major technological innovation come fears and anxieties about how it will affect work: many developments make work easier, but the result is often layoffs and eliminated jobs rather than an easier life for workers. What jobs will AI replace? Can jobs be replaced by AI? Should they be?

What is the responsibility of businesses looking to maximize profit while maintaining ethical clarity? Every business has its own core values and attitudes toward labor, but there is a case to be made for integrating AI into existing workers’ lives rather than assuming AI can simply replace their work.

For instance, marketing agencies might use AI to generate contract language faster, or automate some administrative workflows to free up valuable creative energy. Will marketing jobs be automated? Where is there space for creativity when the act of work is defined differently? This technology has the potential to be a game changer across sectors, and when used in concert with experienced workers, AI can level up any brand.
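As one hypothetical sketch of that kind of automation, here is how an agency might use OpenAI’s Python SDK (v1-style API) to produce a first draft of contract boilerplate for human review. The prompt, model choice, and helper function are illustrative assumptions, not a recommendation of any particular setup.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def draft_contract_clause(topic: str) -> str:
    """Ask the model for a first-draft clause that a human will review.

    This automates the rote first pass, not the expert who must vet,
    correct, and approve the final language.
    """
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative; use whatever model your plan allows
        messages=[
            {"role": "system",
             "content": "You draft plain-English marketing contract clauses "
                        "for review by a human. Flag anything uncertain."},
            {"role": "user",
             "content": f"Draft a short clause covering: {topic}"},
        ],
    )
    return response.choices[0].message.content


print(draft_contract_clause("ownership of AI-assisted deliverables"))
```

The design point is the division of labor: the model supplies a disposable first draft, while vetting and approving the language stays with the human who carries the ethical and legal responsibility.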

The flip side, of course, is the fallout: AI generation might mean fewer freelance jobs in fields like copywriting and graphic design, and workers might try to pass off AI-generated work as original to their own creative process.

Workers might also bill for time they didn’t actually spend working: how are businesses to navigate this? Honesty is crucial in any working relationship, but how do businesses ethically reallocate labor toward different ends? One answer is to work collaboratively with employees; another is to play to their strengths by using AI to take over administrivia and other impersonal, easily replicated work.

To ground this abstract consideration in our own industry, take SEO: AI makes keyword research and content optimization easier, but AI text still has a style that needs to be edited out, and there is no replacement for expertise. This is an opportunity for SEOs to take on higher-level strategic positions, or to spend more time diving into data to make better-informed decisions.

Digital PR pros can spend less time on rote work and more time ideating, going deeper into the ideas and strategies behind their campaigns to create even bigger impact.

Where do we go from here? 

Careful consideration of the many ethical dimensions of AI’s background, implementation, and ongoing development is crucial across industries. While AI is an incredibly powerful tool and assistant, it is easy to overestimate its abilities and soundness of judgment, and easy to use it to deceive others or introduce misinformation into a news sphere already saturated with false narratives. Many jobs hang in the balance, and notions of creativity and morality are also at stake in how businesses ultimately adopt AI.

Some have advocated for an international AI code of ethics as a safeguard, but having a code is one thing; adhering to it is quite another.

It’s a brave new world, and one well worth exploring; we just need to keep our moral clarity while we do it.
