Facebook works on ad topic controls for advertisers

Facebook is building tools to help advertisers keep their ad placements away from certain topics in its news feed.

The company said it will begin testing "topic exclusion" controls with a small group of advertisers. A children's toy company, for example, could avoid appearing alongside content related to "crime and tragedy" if it chose to. Other topics include "news and politics" and "social issues."

The company said developing and testing the tools would take "much of the year."

Facebook, along with players such as YouTube and Twitter, has been working with marketers and agencies through a group called the Global Alliance for Responsible Media (GARM) to develop standards in this area. The group has been working on measures to support "consumer and advertiser safety," including definitions of harmful content, reporting standards, independent oversight, and an agreement to build tools to better manage ad adjacency.

Facebook's news feed controls build on tools that already run in other areas of the platform, such as in-stream video and its Audience Network, which lets mobile developers serve ads inside their apps, targeted at users based on Facebook data.

The concept of "brand safety" matters to any advertiser that wants to keep its company's ads away from certain topics. But there has also been a growing push from the advertising industry to make platforms like Facebook safer overall, not just in the areas near their ad placements.

The CEO of the World Federation of Advertisers, which created GARM, told CNBC last summer that the effort marked a shift from "brand safety" toward a broader focus on "societal safety." The reasoning is that even if no ads appear on or next to a specific video, many platforms are largely funded by advertising dollars. In other words, advertising money helps pay for all of a platform's content, including content that carries no ads. And many advertisers say they feel responsible for what happens on the platforms where they advertise.

That became very clear last summer, when a number of advertisers temporarily pulled their advertising dollars from Facebook, demanding that it take stricter measures to stop the spread of hate speech and misinformation on its platform. Some of those advertisers wanted more than distance between their ads and hateful or discriminatory content; they wanted a plan to get that content off the platform entirely.

Twitter said in December that it is working on its own in-feed brand safety tools.
