Silicon Valley has once again found itself ensnared in a conflict over preserving free speech versus stamping out hateful content.
This weekend's violent protests in Charlottesville, Va., have reignited calls for tech giants, including Facebook Inc. (FB) , Alphabet's (GOOGL) Google and Twitter (TWTR) , to take a harder stance against extremist content. And while doing so would appease some critics, it also threatens to anger those who believe it's outside those companies' purview to police content on their platforms.
Google, Facebook, Reddit and GoDaddy Inc. (GDDY) have all taken action to curb content promoting Neo-Nazi and white nationalist websites. And on Wednesday, Twitter appeared to join them, according to Recode.
Google on Monday said it was canceling the domain registration for The Daily Stormer, a Neo-Nazi news and commentary website, because the site violated the company's terms of service, a Google spokesperson said. The spokesperson cited specific concerns that the Daily Stormer's services incited violence, but didn't elaborate on what exact content violated the policy. Google canceled the Daily Stormer's registration within four hours of the site's application, Business Insider reported.
The Daily Stormer moved to register its domain with Google after GoDaddy banned the site, giving it 24 hours to find a new registrar. GoDaddy decided to ban the Daily Stormer in response to a disparaging article published on the site about Heather Heyer, who was killed during the Charlottesville protests this weekend. Heyer, 32, was struck and killed when a car plowed into a crowd of counterprotesters; 19 others were injured.
The domain registrar was the first tech firm to boot the Neo-Nazi website, a decision GoDaddy made because the Daily Stormer had "crossed the line," according to CEO Blake Irving.
"We always have to ride the fence on making sure we are protecting a free and open internet," Irving told CNBC on Tuesday. "But when the line gets crossed and that speech starts to incite violence, then we have a responsibility to take that down."
The Daily Stormer has since moved its website to the Tor network, an anonymizing service whose hidden sites are often referred to as the dark web. Operating as a Tor hidden service means the Daily Stormer no longer has to rely on a conventional domain registrar to keep its site reachable.
Reddit late Tuesday banned a forum calling for the "physical removal" of Democrats from society, saying that its users were posting content that incites violence, according to The Daily Beast. Over the weekend, the forum pinned a post to the top of its conversation thread that said the Charlottesville attacks were "ethical" and a "morally justified action."
While several tech companies, including Airbnb, took relatively swift action to curb inflammatory content related to the Charlottesville protests, Facebook and Twitter stayed silent, and people began to notice. The Daily Stormer article about Heyer eventually showed up in Facebook's "Trending" section and had amassed 364,000 shares by Tuesday morning.
On Tuesday, Facebook said it would remove any shares of the Daily Stormer article, including those without captions, unless the caption clearly condemns the article or the Daily Stormer. The company said it also removed several Facebook pages associated with extremist groups, including Right Winged Knight, Right Wing Death Squad and White Nationalists United, according to BuzzFeed. The action came after Facebook COO Sheryl Sandberg late Monday denounced the Charlottesville attack in a post on Facebook, saying she was "heartbroken" by the events.
On Wednesday, the Twitter account for The Daily Stormer appeared to have been suspended. Meanwhile, Twitter has continued to highlight its existing terms and conditions; it's unclear whether the company has removed any inflammatory posts or users that have glorified the Daily Stormer.
Tech companies' actions (or inaction) highlight the fine line they must walk when it comes to free speech. Private companies have the right to remove content they don't want served up on their platforms, but they also don't want to be accused of censorship.
"This is one of those damned if you do, damned if you don't things, because short term, not banning the site would have put the firms under pressure as well," said Rob Enderle, president of technology research firm Enderle Group. "The social media sites...want to own their policies, but given this trend, they now will be under increasing pressure to more aggressively censor and every step in that direction is a nail in the coffin of the related service."
They also face the thorny situation of conflicting U.S. and European ideals on free speech. Free speech is closely protected stateside, but the European Union has taken a different approach, pressuring Facebook, Twitter, YouTube and Microsoft Corp. (MSFT) to sign a code of conduct that requires them to remove illegal hate speech in less than 24 hours. German officials took the code of conduct a step further, passing a law that would fine Facebook and Twitter as much as $53 million if they fail to remove hate speech and fake news within a day.
"The question is -- what rules do we want for internet platforms to take down speech?" said Daphne Keller, the director of intermediary liability at the Stanford Center for Internet and Society. "That's a question that has come up with Facebook, but also Twitter and YouTube."
Facebook has generally been willing to remove objectionable content because it wants to preserve a certain kind of community, Keller explained. Users have applauded Facebook for this, but have also raised concerns that Facebook took down content that should have remained on the site, such as an iconic Vietnam War-era photo.
Ultimately, Keller said it's likely best for social media companies to leave it up to users to flag content that incites violence, is offensive or promotes hate, rather than relying on hired moderators to do so. For several years, Facebook and YouTube have employed thousands of human moderators to wade through and take down inappropriate posts and videos.
"There are a lot of people crying out for Facebook to do more, but it is asking a tech company to substitute its judgment for the law and take stuff down," Keller said. "It's been proven more effective to just crowdsource that responsibility to users."
Editors' pick: Originally published Aug. 15.