Technology

Facebook to reduce recommendations, warn users about groups that have violated platform standards

Facebook on Wednesday announced updates aimed at reducing the reach of groups that violate platform standards amid scrutiny over its handling of misinformation and hate speech.

The update will include notifying users before they join a group if the group has allowed posts that violate Facebook’s community standards. Users will be prompted with the notification and can choose to “review” the group or join anyway. 

Facebook is also expanding its policies around not recommending certain groups. Facebook announced earlier this year that it would remove civic and political groups, as well as newly created groups, from recommendations in the U.S. 

Now, when a group starts to violate Facebook’s rules, the platform will start showing it “lower in recommendations,” Facebook’s vice president of engineering, Tom Alison, said in a blog post.

The change “means it’s less likely that people will discover” those groups, Alison wrote. 

Facebook’s existing policies already call for entirely removing groups that repeatedly break the platform’s rules. 

The social media giant will also require group admins and moderators to temporarily approve all posts when a group has “a substantial number of members who have violated our policies or were part of other groups that were removed for breaking our rules,” according to the blog post. 

Additionally, if a user has repeated violations in groups, Facebook said it will block them from being able to post or comment for a period of time in any group. That user will also be blocked from inviting others to any groups or from being able to create new groups. 

The update comes just one week before Facebook CEO Mark Zuckerberg is set to testify in front of the House Energy and Commerce Committee at a hearing centered on misinformation. Twitter CEO Jack Dorsey and Google CEO Sundar Pichai are also scheduled to testify. 

Democrats have criticized Facebook and other platforms for not taking a strong enough stance against misinformation and hate speech, especially after the deadly breach of the Capitol on Jan. 6 that was fueled by false claims about election fraud. 

The platforms are also facing scrutiny over their handling of health misinformation about the coronavirus and the coronavirus vaccines as the vaccine rollout ramps up across the country.