In recent years, Twitter has been widely criticized for not doing enough to curb the hateful content available on its platform. Jack Dorsey's company has policies that forbid users from targeting others with obscenities or degrading content of any kind, yet such behavior still happens openly.
In 2017, Twitter started shutting down profiles that published content the company deemed sensitive. It also introduced temporary restrictions on tweets from accounts considered controversial, showing those tweets only to the account's followers. However, it seems this is not enough, and the company wants to take another step.
Twitter is testing a new feature on iOS that warns users when a tweet they are about to publish contains profanity, inviting them to review it first. Publishing the tweet is never forbidden; the prompt simply urges users to reconsider.
When things get heated, you may say things you don't mean. To let you rethink a reply, we're running a limited experiment on iOS with a prompt that gives you the option to revise your reply before it's published if it uses potentially harmful language.
– Twitter Support (@TwitterSupport) May 5, 2020
As Sunita Saligram, Director of Global Policy at Twitter, says:
We try to encourage people to reconsider their behavior and rethink their language before publishing, because people are often caught up in the heat of the moment and may say something they later regret.
According to Saligram, the new feature will be tested over the next two weeks, in English only, and is aimed at users who violate Twitter's policies occasionally rather than repeatedly. If this trial is successful, the feature will be rolled out globally and in all languages, although that will be a slow and difficult process.