You Twitfaces Round Two

Twitter, Facebook and Google (collectively known here as You Twitfaces, with acknowledgements to Benny A) have all been attacked in the old media lately. There have been so many critical news stories and comment pieces that it’s hard to keep track of what the real issues are, who’s involved and what might happen next. What’s behind all this, especially as none of the issues is even new? Here I’m disentangling the four – as I think there are four – key concerns, putting them into some kind of timeline and pointing to a few useful sources or experts worth checking out. I’ve already flagged up some in previous posts on social media and Internet research, here and here. Along the way I’ll be explaining my trip to the Royal Institution. We’re looking at hate speech, fake news, data collection and surveillance, and content theft.

First up, the Home Affairs Committee, a select committee of the UK House of Commons, started investigating hate crimes and social media last year, prompted largely by the murder of Jo Cox MP. The committee is now suspended because an election was called, so they had to rush out their report, Hate crime: abuse, hate and extremism online, which was published on May 1st. As these things go it is readable and not incredibly long, and if you look it up you will get a flavour of how angry the cross-party MPs were with the corporate spokesmen (yes, they were all men) and their feeble excuses. Witnesses who gave evidence, both individuals and organisations, are listed separately with links to the written evidence. Oral evidence is minuted. The Social Data Science Lab at Cardiff University submitted detailed, well-grounded evidence based on large-scale research projects into hate crime in the UK. They noted that most online hate speech research has been conducted on the big social media platforms (mainly Twitter and Facebook), but that there is a need to examine hate on emerging platforms and in online gaming. They recommended more research into the relationship between online hate speech and offline hate crime.

The corporate spokesmen questioned by the committee were Peter Barron, Vice President, Communications and Public Affairs, for Google; Simon Milner, Policy Director for the UK, Middle East and Africa, for Facebook; and Nick Pickles, Senior Public Policy Manager, for Twitter. The report is scathing about their answers and evidence (available in the minutes for 14 March, and it’s eye-opening). They can’t defend the examples of hate speech, child pornography or illegal extremist content they are presented with, and don’t try. Instead they fall back on their community ‘standards’, relying on users to flag content, and on trying to improve their algorithms. They refuse to say how many human moderators they employ or how much they spend on making sure content that is illegal or violates their terms gets removed. The committee points out that when content breaches copyright it seems to get removed much faster, so they obviously have the means, and they certainly have the money. A flavour of the report as a word cloud:

[Word cloud of the committee report]
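(As an aside, word clouds like this are easy to reproduce. A minimal sketch in Python using the wordcloud and matplotlib packages might look like the following; report.txt is my own placeholder for a saved copy of the report text, and I’m not claiming this is exactly how the clouds in this post were made.)

    # Generate a word cloud image from a locally saved text file.
    from wordcloud import WordCloud, STOPWORDS
    import matplotlib.pyplot as plt

    with open("report.txt", encoding="utf-8") as f:
        text = f.read()

    # Common English stopwords are dropped so the distinctive vocabulary stands out.
    cloud = WordCloud(width=800, height=400,
                      background_color="white",
                      stopwords=STOPWORDS).generate(text)

    plt.imshow(cloud, interpolation="bilinear")
    plt.axis("off")
    plt.savefig("committee_report_cloud.png", dpi=150)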

There are many extraordinary, wriggly exchanges. Peter Barron tries to defend Google allowing David Duke’s antisemitic videos to stay up on YouTube (which Google owns). He says their own lawyers decided the content wasn’t actually illegal. The chair points out that their own guidelines say they don’t support hate speech. Barron then tries to fall back on an alternative free expression argument which shreds any idea that their community standards mean anything.  In other exchanges Nick Pickles tries to defend Twitter’s failure to deal with volumes of abusive racist tweets directed at MPs. Simon Milner tries to defend holocaust denial material on Facebook on the grounds that it attracts a few counter-comments. The MPs make their disgust pretty plain, especially when they finally force the spokesmen to admit that whether they want to or not, their companies do in fact make money out of malicious, hateful and even illegal content.

The constant excuse that they rely on users to report abuses doesn’t go down well with the MPs. In the report’s words, ‘They are, in effect, outsourcing the vast bulk of their safeguarding responsibilities at zero expense.’ One MP tells them he would be ashamed to make his money the way they do. They really don’t like being told that, and they seem to think that saying they work with ‘trusted flaggers’ such as specialist police units will work in their favour. It backfires. The report points out that if these social media companies, earning billions in annual operating profits, are relying on a publicly funded service to do their work, they should repay the cost.

So what does the report recommend? Briefly:

  • All social media companies should introduce clear and well-funded arrangements for proactively identifying and removing illegal content, particularly dangerous terrorist content or material related to online child abuse.
  • The Government should assess whether the continued publication of illegal material, and the failure to take reasonable steps to identify or remove it, is in breach of the law, and how the law and enforcement mechanisms should be strengthened in this area.
  • The Government should consult on adopting similar principles online to those used for policing football matches, for example requiring social media companies to contribute to the costs of the Metropolitan Police’s Counter Terrorism Internet Referral Unit (CTIRU) for enforcement activities which should rightfully be carried out by the companies themselves.
  • Social media companies currently face almost no penalties for failing to remove illegal content. There are too many examples of companies being made aware of illegal material yet failing to remove it, or failing to do so in a timely way. The Government should consult on a system of escalating sanctions, including meaningful fines, for social media companies which fail to remove illegal content within a strict timeframe.
  • Social media companies should review, with the utmost urgency, their community standards and the way in which they are interpreted and implemented, including the training and seniority of those making decisions on content moderation and the way in which the context of the material is examined.
  • Social media companies should publish quarterly reports on their safeguarding efforts, including analysis of the number of reports received on prohibited content, how they responded, and what action is being taken to eliminate such content in the future. If they refuse, the Government should consult on requiring them to do so.
  • Google currently uses its technology to identify illegal or extreme content only in order to help advertisers, rather than to remove such content proactively. It should use that existing technology to abide by the law and meet its own community standards.
  • The Government should review the entire legislative framework governing online hate speech, harassment and extremism and ensure that the law is up to date. It is essential that the principles of free speech and open public debate in a democracy are maintained, but protecting democracy also means ensuring that some voices are not drowned out by harassment and persecution, by the promotion of violence against particular groups, or by terrorism and extremism.

After the election, the committee will move on to considering fake news. In a BBC Panorama programme broadcast on May 8 you can see clips of the committee hearings and comment from MPs, ex-Facebook employees and others. The interview with Simon Milner from Facebook is excruciating: he keeps repeating the same bland, unconvincing statements rather than answering the questions. A disaffected ex-colleague of Mark Zuckerberg, Antonio Garcia Martinez, comes out with what he thinks Facebook really thinks about all the fuss (my transcript): ‘You know what’s naive? The thought that a European bureaucrat who’s never managed so much as a blog could somehow in any way get up to speed or even attempt to regulate the algorithm that drives literally a quarter of the Internet in the entire world. That’s never going to happen in any realistic way.’ A more fully developed version of this arrogant claim and what it’s based on can be read here. It amounts to little more than ‘We’re super-rich, you losers, so we’re above the law.’

So yes, the social media outfits have lots of data on us, can manoeuvre around different legal restrictions because of their global reach, point to their supreme algorithms and fall back on defending free speech. But ultimately they are basically just vast advertising companies, and what does hurt them is advertisers cancelling contracts. That did start to happen after a few recent revelations. Their free speech argument no longer works once it is pointed out that they are making money out of racism, terrorist recruitment sites and child pornography by running ads alongside such nasty content, and that they are also tainting reputable organisations and businesses by linking their ads to it. Free speech has nothing to do with that.

Facebook had to deal with charges that they allowed fake news to influence the US presidential election. They responded first with denials, from Zuckerberg personally, but there was too much evidence to ignore, so they’ve moved on to Plan B: blaming the users. It’s our fault. On May 8, Facebook ran ads in the old media telling us what to do, following on from news stories that they’re hiring 3,000 more people as content checkers and are yet again tweaking their algorithms. It should all have been great PR, but their advice on spotting fake news was unaccountably mocked. Here’s another word cloud based on Facebook’s kindly advice to us all.

[Word cloud of Facebook’s advice on spotting fake news]

How much of a problem is fake news? Last week there was a debate at the Royal Institution, Demockery and the media in a ‘post-factual’ age, hosted by Sussex University, with a panel of journalists from various organisations. I guess the panellists were meant to represent a range of media as well as provide gender balance, and, as seems too often to be the case, the balance didn’t work. Kerry-Anne Mendoza from The Canary and Ella Whelan from spiked-online were under-informed and relied too much on assertion and opinion. Neil Breakwell from Vice News, a former Newsnight deputy editor, and Ivor Gaber, Professor of Journalism and a former political journalist, simply knew a lot more and so were more interesting. This unbalanced ‘balance’ happens too often and I’m going to call it out every time.

Asked about fake news, Mendoza didn’t even want to use the term, as she claimed it had been tainted by Trump. (She also claimed that The Canary was a left-wing alternative for an audience who would otherwise be reading right-wing tabloids. The other panellists thought that claim about its readership was pretty unlikely, and she had no evidence for who its readers actually are.) Whelan, in true spiked contrarian style, disputed that there was even an issue, because it would be patronising to suggest anyone was influenced by fake news. Gaber made the most serious and telling point: it’s not about quibbling over truth or accuracy but about intent. That’s the real reason we should care about fake news. Unfortunately the discussion as a whole didn’t pursue that enough, which was surprising given the salience of current investigations into attempts by the far right and Putin’s regime to interfere with and disrupt democratic elections in the USA and France (at the very least). Mendoza, Breakwell and Whelan seemed mostly concerned to establish their own credentials as reliable sources with good editorial fact-checking practices. There was an intriguing moment when they all ganged up against Buzzfeed for putting last year’s leaked allegations about Trump and Russia online without any corroboration. The chair, Clive Myrie, stopped that, as Buzzfeed weren’t present. Given the event’s own blurb, which included this quotation from the World Economic Forum: ‘The global risk of massive digital misinformation sits at the centre of a constellation of technological and geopolitical risks ranging from terrorism to cyber attacks and the failure of governance’, the discussion hardly matched up.

Now let’s have another word cloud. This one is from Facebook’s own document published at the end of April, Facebook and Information Operations v1. We can probably take it that the timing of this document, days before the select committee report, was not much of a coincidence. Carrying videos of actual crimes including murder on Facebook Live hasn’t helped their case much either.

[Word cloud of Facebook and Information Operations v1]

(I’m making word clouds partly to lighten this long post with some images, but I’m also finding they help show up different emphases in the sources I’ve cited.) The document explains what they are trying to do, and it’s all so bland and minimal that you can’t see why they are not already doing that stuff. But Facebook really, really doesn’t want to get into censoring (as they would see it) online content, even though that is what they do some of the time. Their rhetoric uses the language of ‘community’ and ‘shared responsibility’. Here’s what some other commentators have to say about that.

Sam Levin: Zuckerberg has refused to acknowledge that Facebook is a publisher or media company, instead sticking to the labels of “tech company” and “platform”.

Jane Kirtley: It’s almost like these are kids playing with a toy above their age bracket. We surely need something more than just algorithms. We need people who are sentient beings who will actually stop and analyze these things.

Jim Newton: Facebook wants to have the responsibility of a publisher but also to be seen as a neutral carrier of information that is not in the position of making news judgments. I don’t know how they are going to be able to navigate that in the long term.

Edward Wasserman: If they are in the news business, which they are, then they have to get into the world of editorial judgment.

Jonathan Taplin: Facebook and Google are like black boxes. Nobody understands the algorithms except the people inside.

Paradoxically, the quotations above were to do with Facebook actually censoring content, rather than failing to censor it. The row blew up last year when Facebook took down the famous photograph of a Vietnamese child running naked in terror after a napalm attack on her village. In Taplin’s words, ‘It was probably [censored by] a human who is 22 years old and has no fucking idea about the importance of the [napalm] picture’. After this particular row Facebook stopped using human censors and began relying on its algorithms, which allowed through floods of fake news content. And because they collect so much data on users and deploy it via their famous, but secret, algorithms, those streams of fake news were also targeted. So it’s easy to see that Facebook now not only doesn’t know what it’s doing, it doesn’t know how to defend what it’s doing. Easier to (1) blame the users, although the word they would use is ‘educate’, or (2) say we’re so big you can’t touch us, and anyway it’s far too complicated.

Taplin, quoted above, has a new book out that’s just been well reviewed, Move fast and break things. I’d like to read it but as it has a critique of Amazon as well as the social media companies, I’m going to have to wait.

Taplin is particularly concerned about the effect huge companies such as Google and Amazon have had on smaller businesses around the world that they’ve crushed or swallowed up, and their wholesale theft of the creative products of others.  I should own up here that the cartoon above is from xkcd.

What’s to be done, and what can any of us do? Lawmakers, despite what Martinez has to say, could start to act if they feel their own safety is threatened or elections are being heavily influenced. The far-right social media connection to the murder of an MP, and evidence coming from France and the USA about deliberate interference, can’t be ignored completely. Anne Applebaum (one of the experts I recommended here) was the victim of a smear campaign after she wrote about Russia’s actions in Ukraine, and has described how the fake news channels work and how effective they can be. Over at the Oxford Internet Institute (OII), there’s a research project on Algorithms, Computational Propaganda, and Digital Politics, tracking political bots. It has scrutinised what happened with the French presidential elections as well as in the USA. Philip Howard, the new Professor of Internet Studies at the OII, and Helen Margetts, the Institute’s Director, have both complained recently that the giant social media companies, which have collected so much data on us, are not releasing it to independent academics so that it can be properly researched and we can get a handle on what’s going on. Howard even calls it a sin.

Social media companies are taking heat for influencing the outcomes of the U.S. presidential election and Brexit referendum by allowing fake news, misinformation campaigns and hate speech to spread.

But Facebook and Twitter’s real sin was an act of omission: they failed to contribute to the data that democracy needs to thrive. While sitting on huge troves of information about public opinion and voter intent, social media firms watched as U.S. and UK pollsters, journalists, politicians and civil society groups made bad projections and poor decisions with the wrong information.

The data these companies collect, for example, could have told us in real-time whether fake news was having an impact on voters. Information garnered from social media platforms could have boosted voter turnout as citizens realized the race was closer than the polls showed – and that their votes really would matter. Instead, these companies let the United States and UK tumble into a democratic deficit, with political institutions starved of quality data on public opinion.

Margetts, in Political Turbulence, makes the point that such research is still feasible now, but that with the advent of the Internet of Things it could become completely impossible in future, and politically we would be moving into chaotic times.

After all this it might be a relief to end with a bit of optimism, so here are a handful of possible reasons for it. The political bots research unit doesn’t think attempts to influence the French elections really worked this time. Ivor Gaber reckons that because the UK (print) news media is so biased and sensationalist, fake news has less influence here, because readers don’t believe what they read anyway. Taplin reckons that the younger digital tycoons – he’s thinking of Zuckerberg – care about their public images enough to want to make changes so they are not seen as evil-doers. (Google famously had ‘Don’t be evil’ as its original motto, but that was last century.) Matthew Williams and Pete Burnap, of the Social Data Science Lab mentioned above, gave the select committee some evidence that other users confronting racists on Twitter did seem to have an effect. I’ll quote it in full, as it’s couched as useful advice.

Extreme posts are often met with disagreement, insults, and counter-speech campaigns. Combating hate speech with counter speech has some advantages over government and police responses: it is more rapid, more adaptable to the situation and more pervasive; it can be used by any internet user (e.g. members of the public, charities, the media, the police); and it draws on nodal governance and responsibilisation trends currently prominent in the wider criminal justice system. The following typology of counter hate speech was identified:

  • Attribution of Prejudice, e.g. “Shame on #EDL racists for taking advantage of this situation”
  • Claims making and appeals to reason, e.g. “This has nothing to do with Islam, not all Muslims are terrorists!”
  • Request for information and evidence, e.g. “How does this have anything to do with the colour of someone’s skin??”
  • Insults, e.g. “There are some cowardly racists out there!”

Initial evidence from ongoing experiments with social media data shows that counter speech is effective in stemming the length of hateful threads when multiple unique counter speech contributors engage with the hate speech producer. However, not all counter speech is productive, and the evidence shows that individuals who use insults against hate speech producers often inflame the situation, resulting in the production of further hate speech.

There’s a lot to keep an eye on. I’ll be catching up with You Twitfaces again in a while. Meanwhile here’s Google shyly hiding again.
[Image: Google hiding again]

 
