What brands are doing about harmful content
On January 22, in Davos, the World Federation of Advertisers (WFA), which represents 39 advertisers whose combined $97 billion in media spend flows through six agency holding companies, outlined a plan to “suffocate” harmful digital content by choking off the advertising money that funds those who spread it. The brands the WFA represents include the likes of Unilever, P&G, Mars, Adidas and Lego. Google, Facebook and several trade bodies have also joined the effort.
This is the first initiative of the Global Alliance for Responsible Media (GARM), a working group the WFA founded last June. GARM’s blueprint aims to prevent advertising money from fuelling content promoting terror, drug use and other damaging behaviour. The plan is three-pronged:
- Define ‘harmful content’
- Develop tools that give brands and agencies better control over where their media spend goes
- Establish measurement standards so marketers can quickly spot when their ads appear alongside harmful posts, demonetise those posts and have them taken down
The concerns aren’t surprising; if anything, the move is a couple of years late. The quality of content by and about brands matters more than ever, and brands are looking for ways to insulate themselves and their customers from harmful content because it has a serious business impact.
Be worried. Very worried
According to a study last year by the Trustworthy Accountability Group, 80% of US consumers said they would reduce or stop buying a product if it were advertised next to extreme or dangerous content.
Rocked by a series of brand safety scandals on YouTube, Facebook and Instagram that placed ads for names like AT&T, Hasbro and Disney next to toxic videos, the marketing universe is scrambling for a solution. It wouldn’t be an exaggeration to call the problem a crisis: 73% of respondents said running creatives adjacent to hate speech was the most damaging thing for brand reputation.
The next strongest concern was pornography – 72% said brands should do more to prevent being associated with it. Roughly the same number – 70% – said brands should stop ads from popping up alongside violent content.
These concerns impact purchase decisions, and consumers are placing the onus squarely on advertisers’ shoulders. About 70% said it was up to advertisers to ensure their promotions don’t support dangerous content, while 61% placed that responsibility on agencies.
Deal with it
Within the industry, we commonly identify the following as harmful content:
- Spam
- Scams (remember the Nigerian prince who wants to give you millions in return for your bank details?)
- Posts supporting and inciting violence (we’ve seen an increase of these in India in recent times)
- Hate speech (again, India has had a lot of it recently)
But the definition needs to widen. How do you monitor and shrink the space available for racism, gender bias, faith-based discrimination and the like? And where does free speech end and a reasonable safety standard begin? The debate is not over whether you should be allowed to say what you want; it is over whether advertising money should fund such content. The quest for marketers is to create meaningful engagement that nurtures their relationships with their audiences.
The solution doesn’t always lie in expensive fixes. There’s a lot that can be done easily and economically:
- If you’ve messed up, acknowledge and fix it. Say sorry, commit to making it right
- Invest in a listening programme to catch spikes in negative posts early and engage with the aggrieved. That way, you can control the conversation about your brand
- Use digital platforms not just for customer relationship management but also for analytics. Seeing where most of your traffic comes from, for instance, helps you allocate ad spend better and spot a developing crisis early
- Use testimonials from customers and influencers to boost your image
- Build a quality website
- Reward loyalty to make customers feel appreciated
The digital age has reshaped how brands and audiences connect and perceive each other, and harmful content has the potential to destroy those bonds. Collective action through GARM is a step in the right direction towards a safer digital universe.