A new report from The Guardian shows how Facebook (FB) is trying to curb the spread of violent, abusive and objectionable posts on its sprawling social media network, which now has almost two billion users worldwide. Leaked internal documents detail the complicated and, at times, controversial techniques used to monitor content while preserving free speech, causing some critics to wonder whether Facebook can effectively keep watch over the overwhelming volume of posts on its site.

It's an argument that's swelled over the past several months, beginning on a lesser scale with Facebook's fake news issues, then picking up steam when a Wall Street Journal report revealed that Facebook Live may have broadcast at least 50 acts of violence, including murders and suicides, as well as the beating and torturing of a mentally disabled teen. In April, a Thai man filmed himself killing his 11-month-old daughter in two video clips posted to Facebook before committing suicide.

Facebook shares were trading flat at $148.11 early Monday afternoon.

Facebook has established numerous rules and guidelines to try to remove, or put warning labels on, inappropriate content before it's viewed by users. The result is that some material, even material depicting abuse, is allowed on the site under very specific conditions -- typically if it can help identify and rescue victims. Users are also allowed to live stream attempts at self-harm because Facebook doesn't want to censor people who are in distress. In training manuals sent to content moderators, Facebook justifies the approach by saying that some posts shouldn't be taken down if they can raise awareness about a certain issue, according to the Guardian.

"Videos of violent deaths are disturbing but can help create awareness," the company wrote in an internal memo. "For videos, we think minors need protection and adults need a choice. We mark as 'disturbing' videos of the violent deaths of humans."

The policy became particularly thorny last September, when Facebook removed from its site the iconic "Napalm Girl" photo, which depicts a naked girl fleeing a napalm attack during the Vietnam War, saying it violated its community guidelines. Facebook later republished the photo, saying at the time that it should be shared to convey its historical importance.

The Menlo Park, Calif.-based company has moved to smooth out content issues by hiring 3,000 additional moderators, on top of the 4,500 it already employs for such tasks. In a post on Facebook, CEO Mark Zuckerberg said his company aims to make it faster for moderators to determine which posts violate community standards, but the Guardian noted that moderators have said they're at times forced to make decisions in as few as 10 seconds.

Moderators have also voiced concerns about the inconsistent and confusing nature of Facebook's content guidelines, specifically on issues related to sexual content. For example, some images of non-sexual physical abuse of children don't require removal if there isn't a "celebratory" element or if the images may help children be rescued.

"We allow 'evidence' of child abuse to be shared on the site to allow for the child to be identified and rescued, but we add protections to shield the audience," the company said in an internal training manual.

Moderators have said they often found themselves inundated with mountains of obscene and extreme content -- not just on Facebook, but also on Alphabet's (GOOGL) YouTube, Snapchat (SNAP) and Twitter (TWTR) -- which can cause mental trauma. Sarah Roberts, an assistant professor of information studies at UCLA, has studied how the job can have lasting effects on moderators' psyches. Many moderators are contracted to work with Silicon Valley companies and sometimes don't have access to adequate healthcare, Roberts noted.

Two Microsoft (MSFT) content moderators recently sued the tech giant, claiming that Microsoft didn't offer enough medical support after they'd been traumatized by photos and videos of "indescribable" content.

For its part, Alphabet's (GOOGL) YouTube recently added more ad quality raters, whose job is to flag obscene, hateful or inappropriate content that may violate advertiser guidelines, in the wake of an advertiser boycott. Like Facebook's and other companies' content moderators, the YouTube contractors also complained about the stressful and sometimes traumatizing elements of the job.

Facebook has said it will rely less on humans, instead building stronger algorithms to analyze, detect and remove objectionable content from its site. So far, that doesn't seem to be enough for critics who say Facebook, Alphabet and Twitter (TWTR) must amend their community guidelines, lest they risk being fined by the European Union. Facebook, for its part, has said it feels responsible for how the site's technology, such as the News Feed, is increasingly being used.

Zuckerberg has also commented on the issue on several occasions, including in the 6,000-word manifesto he published earlier this year, in which he noted how Facebook can actually be a force for good in preventing self-harm and other violence.

"When someone is thinking of suicide or hurting themselves, we've built infrastructure to give their friends and community tools that could save their life," Zuckerberg explained. "...Going forward, there are even more cases where our community should be able to identify risks related to mental health, disease or crime."