
Editor’s Note: Emily Bell is founding director of the Tow Center for Digital Journalism at Columbia’s Graduate School of Journalism and a leading thinker, commentator and strategist on digital journalism. The views expressed in this commentary are her own. This is the next installment in the CNN Opinion series on the challenges facing the media, under attack from critics, governments and changing technology.

CNN  — 

At the beginning of November, the general counsels of Facebook, Google and Twitter appeared on Capitol Hill to answer questions about a destabilizing campaign of Russian meddling. Members of Congress grilled them on exactly how each of the platforms might have allowed malicious publishing from foreign actors aimed at influencing the US election.


The widespread existence of fake news spread by bots and hyperpartisan trolls across social media only came to light as a result of the shock election of Donald Trump in 2016. The congressional hearings this month revealed that the problem extended to potentially millions of American voters being exposed to political messaging and ads orchestrated by Russia.

While lawmakers consider how to curb the possibility of such interference happening again, social media companies have started to change their own content policies. This raises an urgent question: does our media environment need more regulation, and if so, who should do it, the government or the vast privately owned technology companies?

In response to intense public pressure over the past few months, the platform companies have started to tighten their own terms of use and change their targeting algorithms in order to combat both fake accounts and extremist hate speech. But when platforms flex their muscles in this way, are they restricting, or even censoring, freedom of speech and the free press?

This week, social media platform Twitter started deleting the “blue check” verification marks from right-wing accounts such as that of white nationalist Richard Spencer. Biz Stone, one of the founders of the social network, tweeted: “Verification was invented to authenticate but became perceived as an endorsement. We should have acted earlier but we are now working on a new program.”

The removal of a blue check from a Twitter account hardly counts as censorship, although there were plenty of opinions from members of the far right on the platform to suggest they think it does. The episode marks just one of a number of changes we are likely to see from social platforms as they adjust their internal policies to control more of the content published on their sites.

Facebook recently experimented by removing all news sources from its news feed in six smaller markets: Slovakia, Guatemala, Serbia, Bolivia, Sri Lanka and Cambodia. The result was that traffic to the news sites plummeted, alarming American publishers about the possibility of these changes being extended to their market. Google also changed its algorithm for news to stop sites like 4Chan from appearing at the top of news results with false stories, as happened after the Las Vegas mass shooting. The change supposedly purged “fake news” culprits but also raised serious concerns about how Google and Facebook help determine what news people read and whether they should have that power.

News organizations are not the only ones concerned about this type of exercise of covert power. In August, after violent far-right protests in Charlottesville, Virginia, the neo-Nazi website The Daily Stormer was essentially removed from the internet when a series of hosting and platform services, including Google, Amazon and Facebook, refused to support it. Cloudflare, a web security service that protects around 10% of internet traffic from hostile attack, also withdrew its services. Cloudflare chief executive Matthew Prince said at the time that his decision to remove the Daily Stormer from the internet was potentially “dangerous,” adding, “Without a clear framework as a guide for content regulation, a small number of companies will largely determine what can and cannot be online.”

Few people would argue that the Daily Stormer’s vile brand of content is missed from the internet, but Prince’s point remains the central concern of many legitimate journalism and news organizations. Their worry is that platforms can and will exercise an increasing amount of economic or editorial power over their existence without a proper regulatory framework or transparency.

With Facebook and Google alone forecast to take 63% of US digital advertising revenue in 2017, according to market research company eMarketer, the ability for other types of advertising-based content businesses to thrive on the web is dwindling.

In the same week that the tech companies appeared on the Hill to testify, billionaire Republican businessman Joe Ricketts abruptly shut down the unprofitable but respected local website networks DNAInfo and Gothamist. After he described unions in September as “corrosive” to business success, it was widely assumed that part of his reasoning was the unionization of the workforce at the hyperlocal news sites. But a major reason was the websites’ continuing unprofitability as advertising dollars have drained away, a drain that has hit smaller news providers particularly hard.

If platform companies continue stepping into the shoes of media companies, they should also be thinking of ways to make high-quality journalism more sustainable. In the past, this, too, has been taken care of to some extent by market regulation, whether it was the rules around local press and TV ownership, or monopoly controls, or even subsidies for public service media.

The proliferation of fake news and propaganda on Facebook and YouTube has partially been caused by a platform design that rewards content that generates the most attention irrespective of quality. The lack of self-regulation that caused this has been dressed up as platform “neutrality.”

Regulation of platforms in terms of size alone would not necessarily be a remedy for this: Five Facebooks is arguably no more help in funding journalism than one, if they all follow the same principles of optimizing their performance only for short-term profit and not a wider civic aim.

The job of controlling malicious and misleading content will not aid democracy as much as it could unless a sustainable model for high-quality free news and information emerges alongside it.

If producers of news need to be protected from economic ravages, then news consumers also need to have their rights supported when it comes to the targeting and circulation of material. The lack of transparency in current commercial messaging, targeting and promotion of content runs counter to the creation of a media-literate society. We urgently need a more rigorous approach to the definition and provenance of the different types of “content” on our social media platforms.