Twitter Rolls Out New Comment Moderation Tools

Twitter has announced a set of new tools designed to give people more control over the conversations that take place on their tweets. Beginning today, the platform is rolling out several features worldwide that let users moderate replies and push back against harassment on the app.

One of the new features is the ability to hide replies to tweets. Previously, users could only delete their own replies or report and block other accounts; now they can hide specific replies from everyone. Hidden replies are not deleted, but are made less visible in the conversation thread by being placed behind an additional layer of access. This gives users more influence over the tone of conversations on their tweets without entirely silencing other participants.

Another addition is an expanded set of options for filtering replies. The update lets users automatically mute replies that contain specific keywords or phrases, or that come from accounts matching certain criteria. For example, a user can hide replies from accounts created in the last 30 days or with fewer than 100 followers, since such accounts often belong to trolls and spammers.

Twitter has also rolled out a slow mode for replies, which lets users set a minimum interval between comments on their tweets. The goal is to encourage more considered responses rather than rapid-fire exchanges during an argument.

Twitter has also introduced “trusted replies” for verified users and accounts with large followings. The feature lets users designate certain accounts whose replies will always be pinned to the top of the discussion, which can be useful for highlighting valuable perspectives.

Beyond these user-driven options, Twitter has improved its automated filters for identifying and handling abusive or spammy tweets. The company says it has fine-tuned its AI systems to recognize subtler forms of harassment and coordinated trolling campaigns.

These features arrive as Twitter continues to face criticism over the prevalence of abusive behavior on its platform. The company has been under pressure to act on harassment and misinformation while navigating questions of free speech and censorship.

Initial feedback on the new tools has been mostly positive, with many users welcoming the added control over their reply sections. However, free speech advocates have cautioned that the features could be used to suppress legitimate criticism or create echo chambers.

Twitter’s head of product, Kayvon Beykpour, addressed these concerns in a statement: “Twitter is an open platform where our mission is to provide people with even more control over the conversation. We believe these tools give users the right level of control while maintaining a variety of opinions.”

The company says it will closely monitor the impact of the new features and adjust them accordingly. It also plans to release more granular data on how moderation decisions affect the way users interact with tweets.

As users begin to experiment with the new tools, social media analysts are already assessing how the changes might affect conversations on Twitter. Some suggest the features could reduce incivility, while others are concerned about potential side effects.

These moderation tools are part of Twitter’s ongoing push to make conversations on its platform healthier. The company has also been developing prompts that encourage people to read articles before sharing them and to reconsider before posting something abusive.

Twitter’s latest update reflects a broader trend: as social media matures, platforms are giving people more control over what happens to them online. Whether these tools actually make the platform a more positive environment remains to be seen, but they are clearly the latest chapter in the ongoing story of content moderation and the policing of behavior online.
