
AI ethics for ad professionals: 10 rules of engagement

AI isn’t coming to the workplace. It’s already here. Many of us use tools with AI under the hood, and both Google and Microsoft recently announced AI-powered versions of their search engines. It’s AI power to the people, no specialized training needed.

AI offers massive potential for advertising, particularly for writing emails and social copy, researching and generating comps, as well as for HR functions like hiring and performance reviews.

Proponents will tell you that AI in the workplace will take care of rote tasks, freeing us up to connect with other humans, be creative and relax. Detractors will remind you that AI could amplify bias, expand surveillance, threaten jobs and cause a whole host of other issues.

Both groups are right. AI is a tool, and what happens next depends on how we use it. Unfortunately, regulation hasn’t kept pace with the technology, which leaves it largely up to us to make choices about how to use AI. In my role in brand strategy at a creative agency, I’ve already seen people debating these choices: Is it okay to use ChatGPT to write a peer review? What about generating AI mockups for a presentation?

We urgently need to define the etiquette around AI in the workplace. There are dense pieces of AI regulation and ethical codes for engineers, but we lack easy, accessible guidelines for the white-collar professionals who are quickly adopting these tools. I want to propose the following guidelines for the workplace use of AI.

10 rules for ad professionals using AI at work

1. Disclose the use of AI

A litmus test of whether you should be using AI for something is whether you’d be comfortable admitting it. If you have no qualms (“I generated stats for our report”), it’s a better use case. If you’d be embarrassed (“Hey mentee, your performance review was written by ChatGPT”), it’s a good indication you shouldn’t. People will have different tolerances, but being transparent will help us openly discuss what’s acceptable.

2. Be accountable

AI has a reputation for “hallucinating”: confidently presenting false information as fact. Google Bard recently gave an inaccurate response in its public demo, and Microsoft Bing came under fire for “gaslighting” users. Whether the problem is factual inaccuracies or badly written emails, we cannot turn AI mistakes into someone else’s problem. Even if it’s the AI’s work, it’s our responsibility.

3. Share AI inputs

With AI, you get out what you put in. Being transparent about inputs will help us all learn how to best use these tools. It will also help us resist the temptation to ask for blatantly biased outputs (“Tell me why Millennials are selfish”) or to use AI to plagiarize (“Give me a picture in the style of Kehinde Wiley”). Transparency encourages us to only engineer prompts we’d be proud to show off.

4. Seek context

AI is very good at retrieving and simplifying information. For those of us whose jobs involve research, this can eliminate the process of sifting through dozens of sites for a simple answer. But it can also eliminate complexity. We run the risk of ceding power to an invisible authority and getting back summaries rather than nuanced perspectives. We must supplement simple, AI-generated outputs with our own research and critical thought. 

5. Offer system transparency

As companies use AI to make more decisions, people have a right to know how systems generate their outcomes. The GDPR requires companies to disclose “meaningful information about the logic involved” in automated decisions, but the U.S. lacks such protections. If a company uses an AI program to recommend raises and bonuses, employees should know what factors it considers and how it weights them.

6. Provide recourse

One company came under scrutiny after allowing an AI-based productivity tool to fire 150 employees by email with no human intervention. The company later said it would manually review each employee’s case. We need to be able to challenge AI outcomes rather than assume the AI is “all-knowing,” and we need access to a human-led system of recourse.

7. Audit AI for bias

One major criticism of AI is that it can amplify bias. ChatGPT has been known to write “wildly sexist (and racist)” performance reviews, even when given generic inputs. There is a record of racial and gender bias in AI-powered hiring tools, which are often trained on datasets filled with human bias. Companies must regularly audit their tools, and individual users need to be diligent about bias in outputs. 

8. Reevaluate time

Another risk of AI: We spend less time around humans and more time with machines. If AI creates efficiency, what are we filling our newfound time with? Instead of defaulting to more work, we need to fundamentally rethink this new bandwidth. The most meaningful use of that time might be connecting with colleagues, chasing a moonshot creative idea, or simply resting.

9. Prioritize humanity

There will be times when AI offers gains in efficiency at a cost to human dignity. Some companies have implemented AI-powered monitoring that forbids workers from looking away from a screen. Some ad agencies are already using AI to replace visual artists. I would implore leaders to prioritize human wellbeing for purely ethical reasons, but companies may also find tangible benefits in taking the high road, just as companies that pay higher wages often benefit from more stable and experienced workforces.

10. Advocate for protections

The vast majority of leaders already plan to use AI to reduce hiring needs. AI models continue to learn from the works of unpaid creators. And most of us do not have the power to fight bias in these tools. There are many small things we can do to use AI more ethically in the workplace, but ultimately we need codified, structural change and elected leaders who promise to build a stronger regulatory landscape.

The road ahead for AI in advertising

Just as the internet changed what it meant to work in advertising, AI is about to radically upend many functions of our jobs. There will be benefits. There will be drawbacks. There will be changes we can’t even imagine yet. As AI exponentially advances, we need to be ready for it.

Ethics is a subjective topic, and I don’t propose this list as a set of commandments etched in stone. My goal is to open up a dialogue about how the ad industry can harness the incredible power of AI while mitigating its risks. I hope that agencies and individuals take up the mantle and start hashing out what we want responsible AI adoption to look like for our industry. 

Hannah Lewman is associate strategy director at Mekanism.

