Technology

Twitter testing feature that asks users if they want to edit ‘harmful’ language in replies

Twitter announced Tuesday that it is testing a tool that would allow users to revise replies containing “harmful” language before they’re published.

“When things get heated, you may say things you don’t mean. To let you rethink a reply, we’re running a limited experiment on iOS with a prompt that gives you the option to revise your reply before it’s published if it uses language that could be harmful,” the tech platform said in a tweet. 

With the new tool, users who hit “send” on a reply will be alerted if their message contains language similar to that in other posts that have been reported. They will then be given the option to revise the reply before it is published.

The test is the latest step in Twitter’s efforts to respond to pressure to tackle hateful posts on its platform. Monitoring currently relies on users flagging offensive posts and on automated screening technology.

Twitter said in its most recent transparency report that it took action against nearly 396,000 accounts under its abuse policies and over 584,000 accounts under its hateful conduct policies between January and June of last year, but critics have said more stringent efforts are needed.

“We’re trying to encourage people to rethink their behavior and rethink their language before posting because they often are in the heat of the moment and they might say something they regret,” Sunita Saligram, Twitter’s global head of site policy for trust and safety, said in an interview with Reuters.

The test will run worldwide for a few weeks but will apply only to English-language tweets.