The White House is hosting a social media summit this week -- but social media platforms, including Facebook, Twitter and Alphabet's YouTube, will be conspicuously absent.
Instead, the summit will feature a number of right-wing publishers, including Turning Point USA and PragerU, the Associated Press reported on Tuesday, while representatives from Twitter (TWTR) , Facebook (FB) and Alphabet (GOOGL) confirmed that they were not invited to the meeting.
At several points, Trump has expressed displeasure with what he regards as bias against his agenda on such platforms. Last fall, he claimed that Google's algorithm is "rigged" against him; in March, he tweeted that Facebook, Twitter and Google are "sooo on the side of the Radical Left Democrats."
While it's not clear what the basis is for Trump's claims, there is growing unease on both sides of the political spectrum that the companies that run major social media platforms are doing an inadequate job of moderating content -- political or otherwise.
YouTube, for example, recently faced a backlash for hosting extremist and other offensive videos on its site, while Twitter and Facebook have faced criticism for their roles in spreading disinformation and other malicious content.
Facebook, for its part, has argued that it shouldn't be the sole arbiter of what types of content are allowed to proliferate, and plans to create an independent oversight board to handle content moderation decisions.
Bob Pearson, an advisor to the W2O Group who studies disinformation and other malicious activity on social media, echoed that sentiment.
"I think it's unfair to these companies to say 'you should be moderating every comment,'" he said. In a recent interview, Facebook CEO Mark Zuckerberg said that despite spending billions annually on safety and security issues, the company still needs help in weeding out bad actors.
While members of Congress have questioned tech executives on content moderation issues -- and are interested more broadly in regulating Big Tech via new privacy or antitrust enforcement -- no concrete policy proposals have emerged yet on how to deal with false or malicious content.
Pearson pointed out that bad actors, whether they are selling illegal goods, propagating extremism or spreading disinformation, tend to work across multiple channels. For that reason, the concept of a "bad actor API" -- where platforms contribute data on harmful content to a shared pool, rather than simply deleting it -- could be part of the solution to content moderation issues.
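The "bad actor API" described above is only a concept, and no actual design has been published. Purely as an illustration of the idea -- platforms pooling fingerprints of harmful content instead of silently deleting it, so others can cross-check -- here is a minimal sketch in Python. All names (`BadActorPool`, `report`, `seen_by`) are hypothetical and chosen for this example:

```python
import hashlib
from dataclasses import dataclass, field

@dataclass
class BadActorPool:
    """Hypothetical shared registry: each platform submits a hash of
    content it removed, building a pool that peers can query."""
    # Maps content hash -> set of platforms that reported it
    reports: dict = field(default_factory=dict)

    def report(self, platform: str, content: bytes) -> str:
        """Record that a platform flagged this content; return its hash."""
        digest = hashlib.sha256(content).hexdigest()
        self.reports.setdefault(digest, set()).add(platform)
        return digest

    def seen_by(self, content: bytes) -> set:
        """Return the set of platforms that have flagged this content."""
        digest = hashlib.sha256(content).hexdigest()
        return self.reports.get(digest, set())

pool = BadActorPool()
pool.report("platform_a", b"scam post")
pool.report("platform_b", b"scam post")
print(sorted(pool.seen_by(b"scam post")))  # both platforms flagged the same item
```

Hashing rather than sharing raw content is one plausible privacy-preserving choice; a real system would also need authentication, dispute handling and governance, none of which the article specifies.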
Pearson said that he believes such a concept will eventually come to fruition -- but individual companies, which have little business incentive to open up their data logs, are unlikely to lead the charge. Rather, such a system would more likely come about through government intervention.
"What the public and private sector have to do is put their rhetoric down and figure out how to work together," he added. "The government may decide to threaten legislation...if they ask everyone in industry to do it, they'll do it."