In the latest turn in a years-long debate over the relationship between social media, advertising, and free speech, Twitter has apologized for letting ads target people based on keywords associated with hate groups. An investigation revealed that advertisers could target users who posted about or searched for terms such as “transphobic,” “white supremacist,” and “anti-gay.”
Twitter collects data on the content users post, like, watch, and share. Advertisers can use this information to segment audiences by attributes, controlling who sees an advertisement based on keywords. An investigation conducted by the BBC demonstrated that advertisers could target people using the term “neo-Nazi.” Twitter also provides advertisers with an estimate of audience size based on the criteria they have selected.
BBC investigators created an advertisement reading “Happy New Year” and used keyword targeting to direct it at three different unspecified groups. According to Twitter policy, ads are reviewed before publication; the test ad went through the review stage, was approved, and ran for a few hours before the BBC ended the campaign. In total, 37 people saw the ad and two users clicked the link, which directed them to a news article. Running the ad cost £3.84, or about $5. “A campaign using the keywords ‘islamophobes’, ‘islamaphobia’, ‘islamophobic’ and ‘#islamophobic’ had a potential to reach 92,900 to 114,000 Twitter users, according to Twitter’s tool.” The same ad was placed before a different audience, targeting “13 to 24-year-olds using the keywords ‘anorexic’, ‘bulimic’, ‘anorexia’ and ‘bulimia’.” It reached 255 users and drew 14 clicks, although Twitter estimated it could reach up to 20,000 people.
Twitter stated that it had policies intended to prevent the exploitation of keyword targeting, but that those policies had not been applied correctly. Twitter’s “preventative measures include banning certain sensitive or discriminatory terms, which we update on a continuous basis,” Twitter said. “In this instance, some of these terms were permitted for targeting purposes. This was an error. We’re very sorry this happened and as soon as we were made aware of the issue, we rectified it. We continue to enforce our ads policies, including restricting the promotion of content in a wide range of areas, including inappropriate content targeting minors.”
It is unclear whether the users targeted in the investigation engaged with those terms because they identified with them, or for other reasons, such as research.
This is not the first time Twitter has grappled with the presence of hateful ideology on its platform. In November, a group of activists protested outside Twitter HQ, calling on the company to ban white supremacists from the platform; a petition supporting the demand was signed by 110,000 people. Other social platforms, such as Facebook, have allowed ads that targeted or discriminated against particular groups.
Twitter has been working to change advertising on its platform, most notably by removing political ads. Correcting discriminatory keyword targeting will likely be its next step. Twitter has previously banned white supremacist and far-right accounts.