
Twitter tests a way to minimize the voices of trolls


Twitter has put its trolls on notice.

On Tuesday, Twitter released promising preliminary results from a test of its new proactive troll filtering tactic. It wanted to see if filtering (but not deleting) content from accounts that exhibited "trolling behavior" could make Twitter more of a platform for conversation and sharing, less an adversarial cesspool.

SEE ALSO: A new study reveals how Russian trolls manipulated our Twitter conversations

In March, CEO Jack Dorsey announced that Twitter would work to measure and then improve the "conversational health" of the platform. The initiative came in response to the revelations of how Russian troll farms had used the platform to inflame the American public. Dorsey's tweet storm announcing the initiative also suggested he was taking an earnest look in the mirror at what the platform he created had become, and what it had done to the world. That was a welcome change of tune for a company that, just a month prior, had still been obfuscating Russian trolls' use of Twitter.

Dorsey said that he wanted Twitter to undergo something of a reckoning, in which it had to actually define what it wanted "healthy conversation" to be. At least publicly, the definition of "conversational health" is still something Twitter is working out; in April, David Gasca, Twitter’s product manager for health, said that Twitter had received 230+ responses to its March Request for Proposals on how to best define, measure, and then improve conversational health on Twitter.

But it appears that the experiment is already underway. For its first improvements to "conversational health," Twitter decided to see if it could reduce the amount of "disruptive behavior" by trolls.

"Some troll-like behavior is fun, good and humorous," Gasca, and Del Harvey, VP of trust and safety, wrote in a post announcing the test. "What we’re talking about today are troll-like behaviors that distort and detract from the public conversation on Twitter, particularly in communal areas like conversations and search."

To do so, Twitter says it found a way to identify accounts that exhibit trolling behavioral markers. These markers include the lack of a verified email address, a high volume of tweets directed at people the account doesn't follow, and more.
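As a rough illustration of how behavioral markers like these might be combined, here is a minimal sketch in Python. The signal names, weights, and scoring function are all hypothetical; Twitter has not published how it actually identifies or weighs these markers.

```python
from dataclasses import dataclass

@dataclass
class AccountSignals:
    # Hypothetical behavioral signals, based on the examples Twitter cited.
    email_verified: bool
    tweets_at_non_followers: int  # tweets aimed at accounts the user doesn't follow
    total_tweets: int

def troll_score(signals: AccountSignals) -> float:
    """Combine behavioral markers into a score in [0, 1].

    Illustrative weights only; Twitter's real model is not public.
    """
    score = 0.0
    if not signals.email_verified:
        score += 0.4
    if signals.total_tweets:
        ratio = signals.tweets_at_non_followers / signals.total_tweets
        score += 0.6 * min(ratio, 1.0)
    return score
```

Under these made-up weights, an account with no verified email that aims 80 of its 100 tweets at strangers would score 0.88, well into troll territory.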


It then delisted content posted by these accounts from search results. And, since so much trolling takes place in replies to tweets, replies from these accounts would only be visible after clicking the "Show more replies" option.
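Continuing the sketch above, the demotion itself might look like the following. This reuses the hypothetical troll_score() function; the threshold and data layout are likewise assumptions, not Twitter's implementation.

```python
TROLL_THRESHOLD = 0.5  # assumed cutoff for demotion

def rank_search_results(tweets):
    """Drop tweets from flagged accounts out of search results entirely."""
    return [t for t in tweets
            if troll_score(t["author_signals"]) < TROLL_THRESHOLD]

def split_replies(replies):
    """Partition a reply thread into replies shown by default and replies
    collapsed behind a 'Show more replies' control."""
    visible, collapsed = [], []
    for reply in replies:
        if troll_score(reply["author_signals"]) < TROLL_THRESHOLD:
            visible.append(reply)
        else:
            collapsed.append(reply)
    return visible, collapsed
```

Note that in neither function is anything deleted: flagged content is simply dropped from search ranking or collapsed in the thread, which is exactly what distinguishes this approach from outright removal.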

Apparently, in its testing markets, Twitter saw a 4% drop in abuse reports from search and an 8% drop from conversations. That's nothing to sneeze at!

The post announcing the test explained the key challenge: how to minimize the voices whose only aim was to inflame or bully, but who weren't actually posting content or behaving in ways that violated Twitter's terms of service. Or, as Gasca and Harvey wrote, "how can we proactively address these disruptive behaviors that do not violate our policies but negatively impact the health of the conversation?"

Filtering rather than deleting content or suspending accounts essentially shrinks the microphone of trolls looking to stir up trouble. They can still post to their heart's content — so there's no "censorship" here — but the likelihood that people will see (and engage with) their content is just a bit lower.

Then again, Twitter's trolls will obviously insist it is censorship anyway. Ok.

Filtering based on behavioral markers is also a proactive tactic. That addresses a frequent criticism of social and digital media companies: that they react to violating or inappropriate content instead of preventing it in the first place. Specifically, some asked why so many Facebook groups' ties to Russia weren't noticed earlier, since they contained obvious markers, such as paying in rubles for ads about American protests. And on YouTube, horrifying videos have made it onto the platform's kids channels, racking up thousands of views from kids before parents noticed and reported the content.

Of course, proactively preventing abuse without chilling amounts of profiling, or raising cries of censorship, is a difficult challenge. Even in this new experiment by Twitter, trolls could get wise to Twitter's behavioral flagging, and adjust their behavior to appear more organic. Mashable has asked Twitter if there are additional indicators not mentioned in the post, and whether Twitter will intentionally keep some of its markers private to avoid manipulation by sneaky, determined trolls. We'll update this post when and if we hear back.

Additionally, abuse reports can certainly reflect whether users are having a bad time on Twitter, but it takes a big trolling push to get users to actually report an account instead of just ignoring it. It's not yet clear what other markers Twitter might use to measure conversational health.

UPDATE 5/16/2018 1:00 p.m. ET: A Twitter spokesperson said that there are a large number of signals that an account engages in trolling behavior. Because there are so many, Twitter does not believe that sharing a few examples will harm its ability to identify trolls.