Facebook had been working on new features and policies for some time in response to a rise in racist and offensive behavior online, and users were waiting for social media platforms to act against hate speech, religious discrimination, and racism. Finally, on August 14, 2019, Facebook announced that it is updating privacy settings for groups and working to better moderate content that breaks the platform's rules.
Tom Alison, Facebook's vice president of engineering, said in a statement:

"Being in a private group does not mean that your actions should go unchecked."

The platform is renaming its confusing secret, closed, and public group settings to the clearer labels of public and private, with options to make private groups either hidden from non-members or visible to them.
The change is part of an ongoing effort to improve group safety through features like letting members see a group's history, letting people preview a group's posts before accepting an invitation, and giving admins more moderation tools to review content before it is posted by others.
The new group settings are also part of the Safe Communities Initiative (SCI), which the company launched two years earlier to monitor and detect inappropriate content in Facebook groups.
The recent announcement comes in the wake of findings that secret Facebook groups have been acting as a gathering place for offensive and racist activity.
One example came a month earlier, when ProPublica found a secret group of Border Patrol agents joking about migrants' deaths.


Closed groups: only current members can view the group's content and see other members. These will be labeled as private but visible groups.

Secret groups: hidden from search, and still requiring an invitation to join. These will be labeled as private and hidden groups.

Facebook says it uses artificial intelligence and machine learning to "proactively detect inappropriate content before any user reports it, and sometimes before any user sees it."
Flagged content is later reviewed by human moderators to determine whether it violates Facebook's Community Standards. Still, the system can fairly be called flawed if hate and offensive speech continue to circulate in those groups.

In April 2019, Facebook updated its policies to hold admins to a higher standard, committing to penalize an entire group if its moderators approve posts that break the platform's rules.

A new tool called "Group Quality" will be given to group admins to ensure they are held responsible for their members' behavior.
This tool will give admins an overview of content that violates Facebook's Community Standards.

Admins will also have the option to cite the specific rule a member broke when muting members, removing comments, or declining pending posts.
While the updated privacy settings are a step forward, some worry that rather than helping to curb the rise of hate speech and misinformation, the changes might push questionable content further underground.

The CEO of the digital investigations consultancy Mimetic said in a statement,
"Facebook should give independent researchers access to similar tools so they can gain insight into what is happening in private groups."

Facebook officials said the new features will surface objectionable content even if admins fail to flag it.
The Group Quality tool will alert admins to potentially offensive and rule-violating posts and give more context on why a specific post is being removed.

A product manager for Facebook Groups said,
"There is a misconception that private groups go unchecked just because they are not visible to the public, but in reality, our proactive detection technology can find violations even if no one in the group reports them. We also have barriers in place to catch bad posts from people who have broken our rules before, and we are holding admins more accountable for what their members share."

The new privacy rules also rely on group admins, who police content in these micro-communities, to help flag offensive and questionable content.
Facebook promoted its communities and groups features in a series of advertising campaigns starting in early 2019. But the rise of groups also allowed insulated communities to spread hate speech and fake news to millions of users. Facebook has faced repeated calls to curb the growth of anti-vaccination groups, where users can easily post hate and misinformation that carries public-health repercussions.

A member of the health research team at Media Matters, Sharon Kann, said in a statement,
"Facebook has changed rules around groups several times in recent years, and new policies may represent a step in the right direction, but it remains to be seen if they will make a difference. We are hopeful that this change means Facebook is taking seriously the spread of misinformation and harassment on the platform--something we know has continued in spite of other policy changes."

Facebook has done well to update its privacy policies and become more active than ever; with hate and offensive speech on the rise, we are in dire need of protecting our communities from this kind of behavior. Such behavior not only degrades the targeted group or individual but also poses a grave danger to peace in the wider world.