This column has been updated on Sept. 28 to note President Trump's criticism of Facebook and Mark Zuckerberg's response to it, as well as new reports about Russian Facebook activity.
With nearly every passing week, there seems to be a fresh controversy about how Facebook Inc. (FB) or Alphabet Inc./Google (GOOGL) regulates or fails to regulate content on its massive platform, or about what content is eligible to be monetized.
Though the near-term damage should be limited, the companies are facing very thorny situations. If they crack down too hard on certain types of content, they risk angering a subset of users. And if they don't crack down enough, they risk upsetting another group of users and/or advertisers.
Either way, Facebook and Google's actions are likely to yield calls for greater government scrutiny of two companies whose enormous and steadily growing role in determining what content and ads consumers see is already making many uneasy.
The subject re-entered the media spotlight on Sept. 27, when President Trump tweeted that Facebook (along with certain media outlets) "was always anti-Trump." Later that day, reports emerged pointing to additional attempts by Russian operatives to use Facebook to spread misinformation and divisive political rhetoric. Facebook, for its part, is getting set to hand over to Congress about 3,000 ads it believes were purchased by Russian operatives.
Mark Zuckerberg rejected Trump's assertion in a Facebook post, declaring that his company aims to "create a platform for all ideas." He also insisted that Facebook's data indicates that "our broader impact -- from giving people a voice to enabling candidates to communicate directly to helping millions of people vote -- played a far bigger role in this election" than misinformation campaigns.
On Sept. 13, Facebook outlined rules banning several types of content from being monetized on its platform -- whether via Instant Articles loading within Facebook's core app, "branded content" that a publisher is paid by a third party to distribute or the "mid-roll" video ads the company has begun running against professional content. Notably, Facebook says it will use both "automated and manual enforcement methods" to decide what content can't be monetized.
Jim Cramer and the AAP team hold positions in Facebook and Alphabet for their Action Alerts PLUS Charitable Trust Portfolio.
Facebook's list of material that might not be eligible for monetization includes material related to "debated social issues." This covers content Facebook deems "incendiary, inflammatory, demeaning or disparages people, groups, or causes," or which it thinks "features or promotes attacks on people or groups." Content related to real-world tragedies might also be ineligible, even if its goal is to "promote awareness or education."
Other types of ineligible material include violent content, adult content and content that features inappropriate language or misappropriates children's characters. Facebook insists those who feel their content was unfairly flagged can submit an appeal, but adds its decision is final.
The move comes as Facebook -- despite attempts to deal with the problem via algorithm changes and content-flagging -- sees ongoing criticism over how its platform has been used to spread fake and misleading viral stories with a political slant. The company's recent disclosure that 470 accounts it believes to have been operated from Russia spent about $100,000 on Facebook ads focused on "amplifying divisive social and political messages" has yielded fresh scrutiny. And last year, Facebook got into hot water over perceived political bias by the editors responsible for its Trending Topics feature; it responded by putting an algorithm in charge of the feature.
Google's YouTube, meanwhile, has seen criticism of its growing efforts to keep ads from running against videos that could make advertisers squeamish swell in recent weeks. Much of the "demonetized" YouTube content is politically themed and/or features colorful language. Heightening the criticism: Videos are typically flagged by an algorithm rather than by employees, and it can take a long time for a reviewer to go over a creator's appeal.
All of this, of course, is happening after many advertisers temporarily stopped buying YouTube ads when an exposé found that ads from major brands were running against videos from extremist groups. Some conservatives have alleged that bias exists in YouTube's demonetization decisions. Google Search and Google News have seen similar allegations over the years; Google, for its part, has long denied that the algorithms powering the services have any such bias.
Bias allegations aside, some of Facebook and YouTube's recent moves are clearly aimed at calming advertiser fears about what kind of material their ads run against. Those fears are amplified by the fact that, compared with independent ad platforms, Facebook and Google often don't share much about exactly where an ad appears. And with regard to preventing the spread of fake or misleading content, extremist material, or material that appears to be part of a propaganda effort by a foreign government, a failure to act decisively invites fresh scrutiny from the media and the public, and potentially from politicians as well.
But considering how pivotal Facebook and YouTube's platforms are to the livelihoods of many publishers and independent creators, and how utterly dominant they respectively are in the realms of social media and ad-supported online video, it's easy to grasp how their recent actions can produce blowback, as well as some unintended consequences. Engadget, taking in Facebook's latest move, observed that some publishers "may choose to not post content about important but messy topics" out of a fear that ads can't be run against it.
And to the extent that Facebook and YouTube users feel that corporate decisions are affecting their ability to see content they like, those decisions can add to a swelling public backlash over the power wielded by tech giants, a backlash to which some politicians are lending a sympathetic ear. "The new corporate leviathans that used to be seen as bright new avatars of American innovation are increasingly portrayed as sinister new centers of unaccountable power," wrote BuzzFeed's Ben Smith in a column about growing calls for tougher regulatory scrutiny of the biggest consumer tech companies.
Short-term, this shift in sentiment isn't likely to have a big impact on Facebook or Google's top lines. Their core services are simply too dominant and pervasive for all but the most bitter of consumers to abandon them. The fact that most of these consumers aren't paying Facebook or Google a cent to use their services doesn't hurt, either.
But it's very much in Facebook and Google's interests to proactively respond to this backlash before it's too late. In addition to PR efforts, providing more transparency about their content decisions would help, as would allowing independent auditors to review those decisions and giving advertisers the option to overrule the demonetization calls made by their algorithms.
As Smith suggested, political backlashes toward ascendant new industries tend to happen slowly until they happen fast.
Editors' pick: Originally published Sept. 13.