Mosul, music and citizens in an age of terror

The final liberation of Mosul from IS is being regularly announced now and one of these days it will actually be true. Three years ago IS used the ancient and highly symbolic al-Nuri mosque to declare its so-called caliphate. A few days ago they blew up the mosque, before it could be recaptured, and have taken to sending out teenage girls as suicide bombers from their last few hold-outs in the old city. Around a million people from the Mosul area have been forced to flee for their lives. It’s unimaginable. It’s as if a city the size of Birmingham had been occupied by a death cult that destroyed its concert halls and libraries, murdered, raped and tortured entire communities and banned everything human, all artistic endeavours, everything that didn’t conform to their own ultra ‘pure’ version of religious conformity.

I’ve posted about Mosul’s history before in this blog, and here I want to commemorate three things: Mosul’s one-time diversity from the days when culturally and religiously plural societies were a normal feature of the entire area, music from Mosul, and ordinary citizens choosing to keep track of events in Mosul and communicate them to the rest of the world. Those citizen experts are anything but ordinary.

Joel Wing

You might have caught Joel Wing being interviewed on BBC radio news recently. He is an expert on what’s happening in Iraq but his background is extraordinary. He runs the Musings on Iraq blog and updates it constantly, currently mostly with news of the battle to retake Mosul. Joel Wing’s blog has become fairly well known and his expertise is trusted by a lot of journalists and analysts. It turns out he’s a history teacher from Oakland, California who decided back in 2008 that since the US was involved in Iraq people ought to know what was happening, and became an expert through sheer continuous dedication and hard work. He collates news from English and Arabic language sources and presents clear summaries, along with reminders via Twitter of what happened on the same date in past years going back to at least 1991 (the war following Saddam Hussein’s invasion of Kuwait). His latest post, as I’m writing this, tells us that Iraqi officials are going to announce final victory today over IS in Mosul, but they’ve made similar impatient announcements before and it’s unlikely that IS will be totally defeated today. Plenty of commentators including Wing are pointing out that ridding Mosul of IS presence won’t mean that IS is finished. Controlling cities or large stretches of territory is only one of their strategies and they’ll continue to spread their ideology, hatred and violence – predominantly towards other Muslims – by other means.

Also today, there are estimates that it will cost a billion dollars to rebuild Mosul but as for repairing the harm done to the people of the city, especially children who have lived through three years of terror, there are no estimates. It will take a long time. UNHCR and UNICEF, Save the Children and no doubt lots of other NGOs are launching appeals.

The Mosul Eye and Mosul music

The Mosul Eye blog is heartbreakingly optimistic. It was set up by an Iraqi historian based in Mosul who has planned reconciliation events, a bring-a-book festival to help restore Mosul’s libraries, and an astonishing violin recital among the ruins to make the point that music can now return to the city in the face of terror. Like the Taliban and the jihadists who took over in Mali, IS banned music and threatened the lives of musicians. The Mosul Eye blogger arranged for a few Mosuli musicians to return and play in one of the ancient Nineveh sites IS tried to destroy. You can hear Ameen Mukdad, the violinist, here with explosions and gunfire in the background.

I played on a track on the Rivers of Babylon Treasures CD, Eliyyahu Eliyahu, which is in the Mosul tradition of Iraqi Jewish music going back at least eighty years but probably much further. It’s the only existing recording as far as I know. It took me a long while to work out that some of the notes don’t exist in Western musical scales because they fall in between the notes of those scales. I recorded my father singing it when he was in his 80s and that’s when I realised I had transposed some of the tune into a more familiar-sounding key in my memory, and had to flatten the notes back to where they should go. I always thought of the song as wistful, and about hoping for a better future when the prophet Elijah eventually shows up.

Next week I’m going to see Songhoy Blues again, a band formed by Malian musicians in exile. They played in London in 2015 alongside the showing of a film, They will have to kill us first, about musicians daring to return to Timbuktu to give a free concert and inspire local people to hope for a return to normality. Their new album is called Resistance.

Old Mosul communities

This photo is of a Christian monastery near Mosul, St Elijah’s or Mar Elias



It was the oldest Christian monastery in Iraq, dating back to around 600 CE. It was destroyed by IS probably in 2014. Mosul used to have the highest proportion of churches of anywhere in Iraq – Catholic, Chaldean, Assyrian, Syrian Orthodox, and probably others I don’t know about. My father remembered going inside churches as a child and seeing what he described as ‘big dolls’ (statues presumably). At one time there were at least twenty churches, not to mention the five synagogues and nearby Yezidi shrines. It’s an understatement to say that communities didn’t always live amicably together in the past but even so, Mosul’s long history is one of many diverse groups living side by side, including many Kurds. From the 1970s the Ba’athists started a Stalinist-style deportation campaign, removing Kurds from northern Iraq and dumping them in southern deserts. It’s estimated that 300,000 may have been killed. This transfer policy also involved settling Ba’athist supporters in the north.

This news story describes Muslims in a liberated area of Mosul helping to rebuild a destroyed Chaldaean church as a gesture of solidarity, but it’s not certain that Christians will feel safe enough to return. Kurdistan is now seeking a referendum on independence. Relations with the government in Baghdad are pretty tense, and although Kurds have taken in hundreds of thousands of refugees and have lost many of their own fighters, it’s been without much help from Baghdad. There’s an all-party parliamentary group on Kurdistan with a website here, and there was a Commons debate on Tuesday which discussed the medical and psychological help needed following Mosul’s liberation and what the UK can do – the proceedings are now available. The Kurdish referendum issue is going to be fraught.

I’ve been recommending experts in this blog (see here and here) as well as occasionally writing about Mosul. If you’re interested in following news from Iraq, I’d recommend Joel Wing as a real expert, of the responsible citizen type.

[Blog housekeeping note: the menu in the top righthand corner shows up OK on some devices and platforms but not others. If you can’t see a menu of links to previous posts, try expanding the set of lines at the top and they should turn into a menu. I’ve been trying to find out how to fix that and make them more visible, but it seems to be a general WordPress design problem.]

 


Are we nodes or are we noodles?

The new Professor of Internet Studies at the Oxford Internet Institute, Philip Howard, gave his inaugural lecture last week. It’s now available online but to save you time, I watched it and summarised what I thought were the most interesting bits, for the fourth of these posts on fake news (previous posts here, here and here). There was a certain amount of flummery at the start – not the soft pudding type – that you can skip if you decide to watch it.

Flummery pudding, also known as mahalabia

Also of course, some daft clothes. But despite the Oxfordy business the OII is a useful place to know about and has done good research ever since it started. I went to the launch conference back in 2002 when I was researching internet related stuff for a doctorate. I liked their ethnographic style, thought it looked promising then and think it’s delivered since, for instance with regular surveys of British users and non-users of the internet, critical studies of Wikipedia, and a strong focus on ethical issues. The launch was at the Said Business School, the building with the ziggurats near the railway station, as the Institute itself is housed in a small building on St Giles near Balliol with no space for large events.


Fifteen years ago at the OII launch the conference ran a session on ‘Participation and Voice’, asking whether the technology would improve or worsen the democratic process. This month Phil Howard asked something similar: Is social media killing our democracy?

He began by arguing that ‘the Internet’ is misnamed as there are now multiple internets. There’s a Snapchatty Yik-yakky one for under-17s that people like him don’t use. Far right conspiracy theorists get together on another one. China has its own internet, built from the ground up as an instrument for social control and surveillance. Some argue that Russia and Iran have the same thing – a distinct internet. The cultures of use are so different it’s tough for researchers to study them all. The Prof then briefly narrated the development of his research by showcasing some of his publications, as he’d been coached that was the right thing to do in an inaugural lecture.

His first book, New Media Campaigns and the Managed Citizen (2005), was an ethnography of the Gore and Bush US election campaigns. The people he studied and got to know were the first of a new breed of e-politics consultants. He discovered that ‘a small group of people – 24 or 25 – make very significant design decisions that have an impact on how all of you experience democracy’. These people formed a small community, socialised together and worked ‘across the aisles’ for Republicans or Democrats as needed. At the end of the campaign several of them went off to work in the UK, Canada, Australia and various other countries, taking the tricks they had developed in big-money public opinion manipulation and applying them in democracies around the world. His conclusion was that this is how innovation in political manipulation now circulates, i.e. via these kinds of roaming consultants with expertise for hire.

Next up, he turned to investigating the consequences of internet access in 75 mainly Muslim countries in a book that, amazingly, you can download for free. His idea was to see how things worked out in societies where censorship and surveillance are permitted and encouraged as a means of cultural protection; countries that liked to participate in the global economy but in constrained ways. He observed significant changes in gender politics, in where people went to learn about religious texts, and above all young people using information technologies to figure out that they shared grievances. He found a clear arc from the mid-2000s to the ‘Arab Spring’. So while his first book was about political elites and the manipulation of democracy, the second was about catching the elites off guard.

His work on ‘the internet of things’, Pax Technica, was more predictive and although the book wasn’t well received he was insistent that it’s necessary to pay attention, look back at what has already happened to online privacy and look forward to guard against what could happen next. He reckons the privacy fight is already lost as far as the current internet(s) are concerned so we need to think ahead. To quote:

The internet of things is the next internet, the one you will not experience through a browser. It will track you in almost everything you do. For a variety of reasons it will generate almost perfect behavioural data that will be useful to social scientists but also governments and security services. In the next few years we have to wrestle with who gets this data, and why, and when…

By 2020 50 billion wireless sensors will be distributed around the world – there will be many more devices than people, to say nothing of satellites, drones and smartphones that people carry. There will be vast amounts of behavioural data on consumption habits, and in democracies any lobbyists who can will try to play with this data…

The average smartphone has 23 apps. The average app generates one location point per minute – little bits of evidence of where we are in physical space…few organisations have the analytical capacity to play with this data – Facebook does. Few do much with it – advertising firms do. Some apps read each other’s data. It’s fodder for an immense surveillance industry that’s about providing you with better consumer goods, identifying your location in space, providing data brokers [with info on us]…

His new programme of research at the OII is looking at social media, fake news, and computational propaganda, or in other words, algorithms and lies. Here are a couple of tasters. How to identify a bot: there are some giveaways. Bots tend to do negative campaigning. They don’t report positive policy ideas or initiatives.

Anger, moral judgments, pictures of politicians taken at a ridiculous angle ‘saying’ things they probably never said. Bots migrate from one topic to another, e.g. latching on to Brexit after years tweeting about something else. A small handful of accounts, after working on Brexit, then became interested in the US election and were pro-Trump. A small number then became interested in the Italian referendum and the French elections, and are now back to the UK. Just as there was a cycle of expertise among the human consultants in the US who took the craft of political manipulation across multiple domains and multiple regime types, there are now social media accounts, with humans behind them, that craft political messages, moving from target to target and meddling in particular domains as needed. One of the great research questions that faces us now is who these people are and, to some degree, how we inoculate our democracies against their ill effects.

Howard prefers to call them highly automated accounts because there is always a human behind them. They do not look like this.


These automated accounts are not up and running all the time. They get turned off after an election. They have noticeable changes of strategy in response to events, for instance spikes in activity at particular moments to coincide with debates. Howard thinks all this presents us with real problems and that social media has made democracy weak. ‘It has a compromised immune system. We’ve gone through that learning curve from social media as exciting opportunity for activists to tools for dictators.’ To make matters worse, people are selectively exposing themselves to secondhand sources of information that intensify what they already believe in a process that might be called elective affinity, so any bias doesn’t meet with much challenge.

We need to figure out what the opposite of selective exposure is. Diversified exposure? We don’t even have a phrase. Randomised encounters? Empathic affinity? A process that allows people to encounter a few new pieces of information, candidates that they haven’t met before or faces they don’t recognise. Whatever those processes are we have to find them and identify them and encourage them.

Before reporting any more of what Howard thinks I should allow for some diversified exposure here and point out that there are other academics who might disagree. Here’s Daniel Kreiss, who has studied political campaigning, taking a markedly different view. He says, basically, that it’s more important to look at the history of how conservatism has been growing in the US than at social media. As for the UK, other academics agree that ‘whether done by bots or human influencers, that people may be surreptitiously emotionally engaged in online debates is deeply worrying’ and there’s plenty more rather tentative comment here.

Going back to the lecture, Howard ends with proposals for how this abundance of data on all of us might be regulated. He has a list.

  1. Report the ultimate beneficiary. You should have the right to find out who is benefiting from data being collected by an item you buy.
  2. It should be possible to add civic beneficiaries to benefit from the data.
  3. Tithes of data. 10% of the bandwidth, processing power and data should be made available to civil society organisations as a way of restoring some balance. Facebook has a monopoly platform position on public life in most countries. That should stop.
  4. A non-profit rule of data mining. The range of variables that are exempt from profit should grow.

It’s not surprising to see Facebook’s data monopoly appearing here. He’s said elsewhere that researchers can only use a small percentage of Twitter data, because that’s what is made accessible, and can’t properly research Facebook even though that’s where a lot of the political conversation – and manipulation – is happening. Facebook doesn’t share.

The Computational Propaganda project at the OII has just released its first case study series, covering nine countries, available here. It’s been covered in a few news articles (Wired, the BBC, Guardian and so forth). A brief snippet to give you the idea what’s in it:

The team involved 12 researchers across nine countries who, altogether, interviewed 65 experts, analyzed tens of millions of posts on seven different social media platforms during scores of elections, political crises, and national security incidents. Each case study analyzes qualitative, quantitative, and computational evidence collected between 2015 and 2017 from Brazil, Canada, China, Germany, Poland, Taiwan, Russia, Ukraine, and the United States.

Computational propaganda is the use of algorithms, automation, and human curation to purposefully distribute misleading information over social media networks.

That’s enough lecturing. I called this post Are we nodes or are we noodles? for a reason. Clearly, we’re all noodles as we’re all likely to be suckered at some point by fragments of the fakery that’s all over whichever internet we’re using. In one of Howard’s books that I’ve actually read, or at least skimmed, he surveys the work of one of my favourite experts, Manuel Castells. Castells and the media is really an introductory reader for students who haven’t yet read Castells, which is fair enough as reading Castells’s own work is a real undertaking. (I’m aiming to add him to my experts series on this blog soon.) Howard summarises one of Castells’s key theories about the network society as ‘People may think they are individuals who join, but actually they are nodes in networks‘.

We’re all nodes as well as noodles. My takeaway message is to be careful about what we’re circulating. Every large-scale tragedy or atrocity now seems to attract lies, myths and propaganda that get wide circulation through deliberate manipulation but also via unwitting noodle-nodes (us, or some of us). Howard suggests (it is a textbook after all) that readers undertake an exercise in visualising their own digital networks. I’m not going to bother with the exercise but some of the other recommendations were good ones, such as:

  • be aware of your data shadow (yes it is following you)
  • use diverse networks
  • be critical of sources and try to have several
  • be aware of your own position in digital networks
  • remember that people in other cultures have different technology habits, and that networks can perpetuate social inequality
  • understand that you are an information broker for other people.

Thanks Phil. Enjoy your new job.

 

 


How Peckham Does God

 


Peckham and godliness might not seem an obvious combination if what you’ve heard has mostly been about gangs, riots and murder. If you’ve travelled through here early on a Sunday you will know different. Katrin Maier quotes her own fieldnote, and it chimes with what I’ve seen:

It is about nine am on a Sunday morning and I am on the way to the Sunday Service at RCCG Tower of God. There is little traffic on the roads, but the bus from Peckham to Tower of God on the Old Kent Road is packed. Most passengers are well-dressed black men, women and children, some holding Bibles. One passenger jokes that the Bus 78 that takes us to the churches on the Old Kent Road has become a ‘church bus’.

The picture above shows a mural not far from Peckham Rye, where William Blake said he had seen a vision of angels in an oak tree. Angelic Peckham Rye.


This area of London has plenty of other religious communities and buildings, but although I have no special expertise I’m going to focus here on its many types of Christianity, since what is most striking about how Peckham does God is the sheer number of newish and, in traditional UK terms, non-traditional churches and chapels. I first started noticing their names years ago and meant to write about those, but now I’m less concerned about the language of church names and more interested in how their congregations do and don’t fit into their localities. Some of the signs I’ve noticed round and about:

  • Freedom Centre International
  • New Congregation of Cherubim Last Vessel of Salvation
  • True Christian Bible Church (Pentecostal)
  • This is Christ Temple Christ’s Gospel World Outreach Evangelistic Ministries
  • Gospel Auditorium We are unstoppable achievers
  • The Holy Order of Cherubim and Seraphim Movement
  • Mountain of Fire and Miracles Ministries
  • Beneficial Veracious Christ Church Miracle Centre

I’m drawing on three key sources, as well as my eyes and ears. John Beasley, a local historian, compiled a short history of Peckham and Nunhead churches from the 18th century up to 1995. Katrin Maier, quoted above, based her 2011 doctoral thesis Redeeming London on research into Nigerian Pentecostalism and more specifically the community around the Redeemed Christian Church of God (RCCG), which had a branch on the Old Kent Road. Being Built Together: A Story of New Black Majority Churches in the London Borough of Southwark (BBT) is a 2013 research report by Andrew Rogers of Roehampton University. This longish post might be mainly of interest to Peckhamites as I know some read this blog, but it might also resonate with other readers if you’ve noticed a changing religious landscape locally and wondered how it relates to the rest of the community.

Peckhamite

What shocked me about Beasley’s historical listings was the sheer number of devastated churches. There is still some wobbly wartime glass in my home, meaning that previous windows must have been blasted out during WW2, but I hadn’t realised the extent to which Peckham had been bombed. Twenty-two churches were hit, and some were completely destroyed: St Mary Magdalene, St Mary’s Road; Salvation Army halls, Nunhead; St Jude’s, Meeting House Lane; the Dissenters’ Chapel in Nunhead Cemetery; St Chrysostom’s, Peckham Hill Street; St Mark’s, Woods Road; Avondale Road Unitarian Church. Others suffered huge damage and were unusable and not rebuilt for many years, if ever: Christ Church, Old Kent Road; St Luke’s, North Peckham; St Anthony’s, Nunhead Lane; St Andrew’s, Glengall Road; Rye Lane Chapel; Norfolk Street Baptist Chapel; St Saviour’s, Copleston Road; Cheltenham College Mission, Nunhead Grove; Licensed Victualler’s Chapel, Asylum Road; North Peckham Baptist Church; Peckham Park Road Baptist Church; Peckham Rye Tabernacle, Nigel Road; Clifton Congregational Church, Asylum Road; Hatcham Mission (Wesleyan), Tustin Street. The bomb that damaged St Silas in Ivydale Road also killed the vicar.

High explosives, flying bombs, incendiary bombs and land mines all hit Peckham because it wasn’t far south of the City and docklands, and was vulnerable to German planes that hadn’t got through to more central targets. There’s a map here showing where the bombs landed in Peckham Rye ward, and on the same site you can see maps of bomb strikes in the neighbouring wards, The Lane and Peckham. The destruction of homes was worse and led eventually to huge new estates being built with everything that followed. Southwark Council became one of the country’s largest landlords.

Pentecostal and Apostolic churches eventually took over some of the premises that the Anglican and Baptist churches hadn’t been able to restore for their own use. But the real change started from the 1990s onwards when the new Black Majority Churches (nBMC is the acronym the BBT report uses for churches of that type founded since the 1950s) took off at scale. Re-used old church premises were a starting point in the past but the present looks very different. Here is the headline info from the BBT report:

  • Southwark is ‘the African capital of the UK’.
  • 252 nBMCs were identified in the borough.
  • That is more than double the total number of historic / new / independent churches in the borough.
  • Nearly half of these are in one postcode (SE15, i.e. in Peckham). The researchers comment ‘We might speculate that this represents the greatest concentration of African Christianity in the world, outside of Africa’.

I’m not trying to summarise Maier’s research or the BBT report as they are long and based on years of detailed work. Their research methods and aims are very different. Maier was embedded as a participant in RCCG communities, going to services and getting to know congregants in London and Nigeria. She is interested in culture and community, and how migration, gender and religion all interact, and she has rich and recognisable descriptions of the visible world of Nigerian Pentecostalism in Peckham. Shopping here can be divine.


The BBT research report is sensitively written and emphasises positive aspects, as well as identifying issues the local council should be more aware of, and some that the churches themselves need to consider. It mainly sticks with relatively easy matters such as parking, noise, neighbours’ complaints, the pressure on rental spaces on industrial estates in competition with small businesses, and problems with the concentration of a lot of churches in a few areas.

They both discuss, cautiously, how the churches do or don’t relate to the surrounding culture or deal with racism and suspicion. For instance Maier points out that African Christianity can be stigmatised because of high-profile cases involving exorcism and child abuse, while church members worry about the corrupting influence of liberal attitudes towards child-raising. There certainly have been some shocking scandals, although not directly concerning RCCG. The ‘miracle babies’ pastor Gilbert Deya, who was tried for child abduction, had a church in Peckham. Mega churches such as Lighthouse, originally Ghanaian, which has a cathedral nearby, and UCKG from Brazil, which has a branch in Peckham, have been described as cults which exploit their followers and accrue wealth for their founders.

The BBT report shows that many congregations are multi-ethnic although they have few white members. Some factors cut across each other. RCCG as a vast global organisation aims to set up more churches, and have them as close as five minutes’ drive from each other. The BBT report recommends fewer new setups and more consolidation or sharing of premises, in response to local concerns and problems. Some of the survey and interview respondents agreed there could be more cooperation but I wondered whether their competing interests, to the extent that they are businesses, let alone their competing theologies, might get in the way. One pastor is quoted as saying that certain others had ‘…a very different kind of mentality about church, it’s more like a business for many of them and you know, the competitive spirit that they bring to church’.

I hadn’t realised before reading the BBT report that some new Peckham churches serve ethnic communities to a greater extent than local communities, or in other words, congregants tend to come from across London and beyond…’because we are ethnic driven, people do not live locally. They are coming in from north, south, east and west, they drive in, they park, they come into the service. So straight away you’re not local’.

They are in Peckham partly because that’s where they found cheap enough premises. But being based on industrial estates can make it harder to have any engagement with the local community, especially if they have to move on when a lease runs out…‘then you’re trapped in a dangerous dynamic because the more you move the more ethnic you remain. When you finally settle down you can consider trying to be more relevant to the local population but because you’ve been consolidating your ethnicity, you find that by the time you’ve settled down you really need some kind of genetic mutation to happen’. Some of the churches in the BBT survey said they had little or no engagement with the local community.

Does that mean Peckham is not so godly after all? Here’s a selection of post-it notes displayed on the Peace Wall. Following the 2011 riots (I stepped out of Peckham Rye station as they were kicking off and it was horrible, and terrifying) thousands of local people stuck messages on post-it notes on boards over broken shop windows on Rye Lane. They’ve been preserved as a permanent exhibit near the Library.


I’m not suggesting the rest of us who live here, or the people who run bars, cafes, club nights and sourdough bakeries without bringing God into it, care any less about Peckham. And maybe there is more in common with old-school missions to Peckham than you’d think at first. In the nineteenth and early twentieth centuries clergymen complained about how hard it was to get working men to go to church – they were too tired, it was expensive and anyway they were frankly indifferent – although this was an area of poor crowded slums that religious movements then felt needed intervention. The settlement movement brought young men and women from middle class schools and Oxbridge colleges to live and work among the poor. There are still vestiges nearby, such as the Blackfriars Settlement.

I have no idea if or how Peckham’s local cultures will merge or adapt, but it would be good for us to get to know more about each other.

 


A fake news flow tracker

‘The Russians were pioneers. They understood that social media could be manipulated long before other people did.’

In a recent interview, available as a podcast here, Anne Applebaum explained why fake news in the context of the US elections isn’t really news, because there’s no country in Europe that doesn’t have a similar story about it. Russia has been purposefully and systematically disseminating fake news memes for at least a couple of years. It’s a long interview and you may not have time, but she gets on to how fake news is made to flow from about 36 minutes in. She makes a lot of important, and worrying, statements about the impact these disinformation campaigns have already had in Europe, for example in support of the far right in Hungary. I’ve previously recommended Applebaum as an expert worth following and here I can’t do better than quote from the Sam Harris podcast. It adds a lot more substance to the fake news discussion I reported on in a previous post.

She warns that [in relation to the Trump campaign’s Russian links] the danger is that the FBI investigation won’t find a smoking gun and then people will say it’s all right. But we don’t need a smoking gun. We can see it.

It’s a pattern of politics that they [the Putin regime] follow…they seek out extreme groups and movements that they support sometimes quite openly…they support the far right in Hungary…sometimes it’s with money, sometimes contacts, social media campaigns…there’s a pattern to it, it works the same in every country. They adjust it depending on the politics. Sometimes they support the far left, sometimes the far right. Sometimes they support business people. But in every country they do the same thing…done deliberately…they create the themes then an enormous network of trolls and bots…repeated on dozens and dozens of conspiracy sites…not an accident…a deliberate tactic

look at what happens in other countries, then you can see that it’s a pattern…For Americans it’s new, it’s not new for Europeans…most of the time, Russian interference in foreign elections takes the same forms that it did in the United States. Russian websites operating openly (Russia Today, Sputnik) or under other names spin out false rumors and conspiracy theories; then trolls and bots, either Russian or domestic, systematically spread them.

tree 2 copy

In another article, written late last year, Applebaum gives a few examples of Russian interference on behalf of Le Pen in the French elections, and also describes an experience of her own when she became a target.

it was eye-opening to watch the stories move through a well-oiled system, one that had been constructed for exactly this sort of purpose…WikiLeaks — out of the blue — tweeted one of the articles to its 4 million followers… As I watched the story move around the Web, I saw how the worlds of fake websites and fake news exist to reinforce one another and give falsehood credence. Many of the websites quoted not the original, dodgy source, but one another…many of their “followers” (maybe even most of them) are bots — bits of computer code that can be programmed to imitate human social media accounts and told to pass on particular stories

tree 1

In a future post I’ll be looking at what’s being done in response and what’s recommended that we do ourselves so we don’t end up being fooled or worse, unwitting colluders.

 

 


You Twitfaces Round Two

Twitter, Facebook and Google (collectively known here as You Twitfaces, with acknowledgements to Benny A) have all been attacked in the old media lately. There have been so many critical news stories and comment pieces that it’s hard to keep track of what the real issues are, who’s involved and what might happen next. What’s behind all this, especially as none of the issues is even new? Here I’m disentangling the four – as I think there are four – key concerns, putting them into some kind of timeline and pointing to a few useful sources or experts worth checking out. I’ve already flagged up some in previous posts on social media and Internet research here and here. Along the way I’ll be explaining my trip to the Royal Institution. We’re looking at hate speech, fake news, data collection and surveillance, and content theft.

First up, the Home Affairs committee, a select committee of the UK House of Commons, started investigating hate crimes and social media last year, prompted largely by the murder of Jo Cox MP. The committee is now suspended because an election was called, so they had to rush out their report Hate crime: abuse, hate and extremism online. It was published on May 1st. As these things go it is readable and not incredibly long, and if you look it up you will get a flavour of how angry the cross-party MPs were with the corporate spokesmen (yes, they were all men) and their feeble excuses. Witnesses who gave evidence, both individuals and organisations, are listed separately with links to the written evidence. Oral evidence is minuted. The Social Data Science Lab at Cardiff University submitted detailed, well-grounded evidence based on large-scale research projects into hate crime in the UK. They noted that most online hate speech research has been conducted on social media platforms (mainly Twitter and Facebook) but there’s a need to examine hate on emerging platforms and online gaming. They recommended more research into the relationship between hate speech online and offline hate crime.

The corporate spokesmen questioned by the committee were Peter Barron, Vice President, Communications and Public Affairs, for Google; Simon Milner, Policy Director for the UK, Middle East and Africa, for Facebook; and Nick Pickles, Senior Public Policy Manager, Twitter. The report is scathing about their answers and evidence (available in the minutes for 14 March, and it’s eye-opening). They can’t defend the examples of hate speech, child pornography or illegal extremist content they are presented with, and don’t try. Instead they fall back on their community ‘standards’, relying on users to flag content, and on trying to improve their algorithms. They refuse to say how many human moderators they employ or how much they spend on making sure content that is illegal or violates their terms gets removed. The committee points out that when content breaches copyright, it seems to get removed much faster, so they obviously have the means and they certainly have the money. A flavour of the report as a word cloud:

CommitteeReport

There are many extraordinary, wriggly exchanges. Peter Barron tries to defend Google allowing David Duke’s antisemitic videos to stay up on YouTube (which Google owns). He says their own lawyers decided the content wasn’t actually illegal. The chair points out that their own guidelines say they don’t support hate speech. Barron then tries to fall back on an alternative free expression argument which shreds any idea that their community standards mean anything.  In other exchanges Nick Pickles tries to defend Twitter’s failure to deal with volumes of abusive racist tweets directed at MPs. Simon Milner tries to defend holocaust denial material on Facebook on the grounds that it attracts a few counter-comments. The MPs make their disgust pretty plain, especially when they finally force the spokesmen to admit that whether they want to or not, their companies do in fact make money out of malicious, hateful and even illegal content.

The constant excuse that they rely on users to report abuses doesn’t go down well with the MPs. In the report’s words, ‘They are, in effect, outsourcing the vast bulk of their safeguarding responsibilities at zero expense.’ One MP tells them he would be ashamed to make his money the way they do. They really don’t like being told that, and they seem to think that saying they work with ‘trusted flaggers’ such as specialist police units will work in their favour. That backfires. The report points out that if these social media companies earning billions in annual operating profits are relying on a publicly funded service to do their work, they should repay the cost.

So what does the report recommend? Briefly:

  • that all social media companies introduce clear and well-funded arrangements for proactively identifying and removing illegal content—particularly dangerous terrorist content or material related to online child abuse.
  • Government should now assess whether the continued publication of illegal material and the failure to take reasonable steps to identify or remove it is in breach of the law, and how the law and enforcement mechanisms should be strengthened in this area.
  • Government should consult on adopting similar principles online to those used for policing football matches —for example, requiring social media companies to contribute to the Metropolitan Police’s CTIRU for the costs of enforcement activities which should rightfully be carried out by the companies themselves.
  • social media companies currently face almost no penalties for failing to remove illegal content. There are too many examples of social media companies being made aware of illegal material yet failing to remove it, or to do so in a timely way. Government should consult on a system of escalating sanctions to include meaningful fines for social media companies which fail to remove illegal content within a strict timeframe.
  • social media companies should review with the utmost urgency their community standards and the way in which they are being interpreted and implemented, including the training and seniority of those who are making decisions on content moderation, and the way in which the context of the material is examined.
  • social media companies should publish quarterly reports on their safeguarding efforts, including analysis of the number of reports received on prohibited content, how the companies responded to reports, and what action is being taken to eliminate such content in the future.  If they refuse, the Government should consult on requiring them to do so.
  • Google is currently only using its technology to identify illegal or extreme content in order to help advertisers, rather than to help it remove illegal content proactively. They should use their existing technology to help them abide by the law and meet their community standards.
  • Government should review the entire legislative framework governing online hate speech, harassment and extremism and ensure that the law is up to date. It is essential that the principles of free speech and open public debate in democracy are maintained—but protecting democracy also means ensuring that some voices are not drowned out by harassment and persecution, by the promotion of violence against particular groups, or by terrorism and extremism.

After the election, the committee will move on to considering fake news. In a BBC Panorama programme broadcast on May 8 you can see clips of the committee hearings and comment from MPs, ex-Facebook employees and others. The interview with Simon Milner from Facebook is excruciating: he keeps repeating the same bland, unconvincing statements rather than answer the questions. A disaffected ex-colleague of Mark Zuckerberg, Antonio Garcia Martinez, comes out with what he thinks Facebook really thinks about all the fuss (my transcript): ‘You know what’s naive? The thought that a European bureaucrat who’s never managed so much as a blog could somehow in any way get up to speed or even attempt to regulate the algorithm that drives literally a quarter of the Internet in the entire world. That’s never going to happen in any realistic way.’ A more fully developed version of this arrogant claim and what it’s based on can be read here. It amounts to little more than: we’re super-rich, you losers, so we’re above the law.

So yes, the social media outfits have lots of data on us, can manoeuvre around different legal restrictions because of their global reach, point to their supreme algorithms and fall back on defending free speech. But ultimately they are basically just vast advertising companies, and what does hurt them is advertisers cancelling contracts. That did start to happen after a few recent revelations. Their free speech argument doesn’t work any longer once it is pointed out that they are making money out of racism, terrorist recruitment sites and child pornography, by running ads alongside such nasty content, and are also tainting reputable organisations and businesses by linking their ads to it. Free speech has nothing to do with that.

Facebook had to deal with charges that they allowed fake news to influence the US presidential elections. They responded first with denials,  from Zuckerberg personally, but there was too much evidence to ignore so they’ve moved on to Plan B, blaming the users. It’s our fault. On May 8, Facebook ran ads in the old media telling us what to do, following on from news stories that they’re hiring 3,000 more people as content checkers and are yet again tweaking their algorithms. It should all have been great PR but their advice on spotting fake news was unaccountably mocked. Here’s another word cloud based on Facebook’s kindly advice to us all.

FacebookAdvice
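(In case you’re wondering how these word clouds get made: under the hood it’s just word-frequency counting, which a renderer then turns into an image by scaling each word to its count. Here’s a minimal, illustrative sketch of that counting step – not the actual tool I used, and the stopword list and sample text are my own inventions.)

```python
from collections import Counter
import re

# A tiny stopword list for illustration; real word-cloud tools ship much longer ones.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "that"}

def word_frequencies(text):
    """Count words in `text`, ignoring case, punctuation and stopwords.
    These counts are what a word-cloud renderer scales each word by."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(w for w in words if w not in STOPWORDS)

# Illustrative input only -- not the committee report itself.
sample = "Hate speech and illegal content: companies must remove illegal content."
freqs = word_frequencies(sample)
print(freqs.most_common(3))
```

Feed the counts to something like the Python `wordcloud` package and you get the images scattered through this post.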

How much of a problem is fake news? Last week there was a debate at the Royal Institution, Demockery and the media in a ‘post-factual’ age, hosted by Sussex University, with a panel of journalists from various organisations. I guess the panellists were meant to represent a range of media as well as provide gender balance and, as seems too often to be the case, the balance didn’t work. Kerry-Anne Mendoza from The Canary, and Ella Whelan from spiked-online, were under-informed and relied too much on assertion and opinion. Neil Breakwell from Vice News, a former Newsnight deputy editor, and Ivor Gaber, Professor of Journalism and a former political journalist, simply knew a lot more and so were more interesting. This unbalanced ‘balance’ happens too often and I’m going to call it out every time.

Asked about fake news, Mendoza didn’t even want to use the term as she claimed it had been tainted by Trump. (She also claimed that The Canary was a left-wing alternative for an audience who would otherwise be reading right-wing tabloids. The other panellists thought that claim about its readership was pretty unlikely, and she had no evidence for who its readers actually are.)   Whelan in true spiked contrarian style disputed there was even an issue because it would be patronising to suggest anyone was influenced by fake news. Gaber made the most serious and telling point, that it’s not about quibbling over truth or accuracy but it’s about intent.  That’s the real reason we should care about fake news. Unfortunately the discussion as a whole didn’t pursue that enough, surprisingly given the salience of current investigations into attempts by the far right and Putin’s regime to interfere with and disrupt democratic elections in the USA and France (at the very least). Mendoza, Breakwell and Whelan seemed mostly concerned to establish their own credentials as reliable sources with good editorial fact-checking practices. There was an intriguing moment when they all ganged up against Buzzfeed for putting last year’s leaked allegations about Trump and Russia online without any corroboration. The chair, Clive Myrie, stopped that as Buzzfeed weren’t present. Given the event’s own blurb, which included this quotation from the World Economic Forum: The global risk of massive digital misinformation sits at the centre of a constellation of technological and geopolitical risks ranging from terrorism to cyber attacks and the failure of governance, the discussion hardly matched up.

Now let’s have another word cloud. This one is from Facebook’s own document published at the end of April, Facebook and Information Operations v1. We can probably take it that the timing of this document, days before the select committee report, was not much of a coincidence. Carrying videos of actual crimes including murder on Facebook Live hasn’t helped their case much either.

Facebook cloud

(I’m making word clouds partly to lighten this long post with some images but I’m also finding they do help show up different emphases in the sources I’ve cited.) The document explains what they are trying to do, and it’s all so bland and  minimal you can’t see why they are not already doing that stuff. But Facebook really, really doesn’t want to get into censoring (as they would see it) online content, even though that is what they do some of the time. Their rhetoric uses the language of ‘community’ and ‘shared responsibility’. Here’s what some other commentators have to say about that.

Sam Levin: Zuckerberg has refused to acknowledge that Facebook is a publisher or media company, instead sticking to the labels of “tech company” and “platform”.

Jane Kirtley: It’s almost like these are kids playing with a toy above their age bracket. We surely need something more than just algorithms. We need people who are sentient beings who will actually stop and analyze these things.

Jim Newton: Facebook wants to have the responsibility of a publisher but also to be seen as a neutral carrier of information that is not in the position of making news judgments. I don’t know how they are going to be able to navigate that in the long term.

Edward Wasserman: If they are in the news business, which they are, then they have to get into the world of editorial judgment.

Jonathan Taplin: Facebook and Google are like black boxes. Nobody understands the algorithms except the people inside.

Paradoxically, the quotations above were to do with Facebook actually censoring, rather than failing to censor content. The row blew up last year when Facebook took down the famous photograph of a Vietnamese child running naked in terror after a napalm attack on her village. In Taplin’s words, ‘It was probably [censored by] a human who is 22 years old and has no fucking idea about the importance of the [napalm] picture’. After this particular row Facebook stopped using human censors and began relying on its algorithms which allowed through floods of fake news content. Because they collect so much data on users and deploy it via their famous, but secret, algorithms, those streams of fake news were also targeted. So it’s easy to see that now Facebook doesn’t only not know what it’s doing, it doesn’t know how to defend what it’s doing. Easier to (1) blame the users, although the word they would use is ‘educate’, (2) say we’re so big you can’t touch us and anyway it’s far too complicated.

Taplin, quoted above, has a new book out that’s just been well reviewed, Move fast and break things. I’d like to read it but as it has a critique of Amazon as well as the social media companies, I’m going to have to wait.

Taplin is particularly concerned about the effect huge companies such as Google and Amazon have had on smaller businesses around the world that they’ve crushed or swallowed up, and their wholesale theft of the creative products of others.  I should own up here that the cartoon above is from xkcd.

What’s to be done, and what can any of us do? Lawmakers, despite what Martinez has to say, could start to act if they feel their own safety is threatened, or elections are being heavily influenced. The far-right social media connection to the murder of an MP, and evidence coming from France and the USA about deliberate interference, can’t be ignored completely. Anne Applebaum (one of the experts I recommended here) was the victim of a smear campaign after she wrote about Russia’s actions in Ukraine, and has described how the fake news channels work and how effective they can be. Over at the Oxford Internet Institute (OII), there’s a research project on Algorithms, Computational Propaganda, and Digital Politics tracking political bots. It has scrutinised what happened with the French presidential elections as well as in the USA. Philip Howard, the new Professor of Internet Studies at the OII, and Helen Margetts, the Institute’s Director, have both complained recently that the giant social media companies, who have collected so much data on us, are not releasing it to independent academics so that it can be properly researched and we can get a handle on what’s going on. Howard even calls it a sin.

Social media companies are taking heat for influencing the outcomes of the U.S. presidential election and Brexit referendum by allowing fake news, misinformation campaigns and hate speech to spread.

But Facebook and Twitter’s real sin was an act of omission: they failed to contribute to the data that democracy needs to thrive. While sitting on huge troves of information about public opinion and voter intent, social media firms watched as U.S. and UK pollsters, journalists, politicians and civil society groups made bad projections and poor decisions with the wrong information.

The data these companies collect, for example, could have told us in real-time whether fake news was having an impact on voters. Information garnered from social media platforms could have boosted voter turnout as citizens realized the race was closer than the polls showed – and that their votes really would matter. Instead, these companies let the United States and UK tumble into a democratic deficit, with political institutions starved of quality data on public opinion.

Margetts, in Political Turbulence, makes the point that such research is still feasible now but with the advent of the Internet of Things it could become completely impossible in future, and politically we would be moving into chaotic times.

After all this it might be a relief to end with a bit of optimism, so here are a handful of possible reasons. The political bots research unit doesn’t think attempts to influence the French elections really worked this time. Ivor Gaber reckons that because the UK (print) news media is so biased and sensationalist, fake news has less influence here because readers don’t believe what they read anyway. Taplin reckons that the younger digital tycoons – he’s thinking of Zuckerberg – care about their public images enough to want to make changes so they are not seen as evil-doers. (Google famously had ‘Don’t be evil’ as their original motto, but that was last century.) Matthew Williams and Pete Burnap, of the Social Data Science Lab mentioned above, gave the select committee some evidence that other users confronting racists on Twitter did seem to have an effect. I’ll quote it in full, as it’s couched as useful advice.

Extreme posts are often met with disagreement, insults, and counter-speech campaigns. Combating hate speech with counter speech has some advantages over government and police responses: it is more rapid, more adaptable to the situation and more pervasive; it can be used by any internet user (e.g. members of the public, charities, the media, the police); and it draws on nodal governance and responsibilisation trends currently prominent in the wider criminal justice system. The following typology of counter hate speech was identified:

Attribution of Prejudice

e.g. “Shame on #EDL racists for taking advantage of this situation”

Claims making and appeals to reason

e.g. “This has nothing to do with Islam, not all Muslims are terrorists!”

Request for information and evidence

e.g. “How does this have anything to do with the colour of someone’s skin??”

Insults

e.g. “There are some cowardly racists out there!”

Initial evidence from ongoing experiments with social media data shows that counter speech is effective in stemming the length of hateful threads when multiple unique counter speech contributors engage with the hate speech producer. However, not all counter speech is productive, and evidence shows that individuals who use insults against hate speech producers often inflame the situation, resulting in the production of further hate speech.
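The Lab’s finding suggests a simple measurement: for each thread, compare its length with the number of distinct accounts posting counter speech. Here’s a hypothetical sketch of that tally – the data, field names and function are my own invention for illustration, not the Lab’s actual methodology:

```python
from collections import defaultdict

# Invented example data: (thread_id, author, is_counter_speech)
posts = [
    ("t1", "userA", False), ("t1", "userB", True), ("t1", "userC", True),
    ("t2", "userD", False), ("t2", "userD", False), ("t2", "userE", True),
]

def thread_stats(posts):
    """For each thread, return (total posts, distinct counter-speech authors).
    Distinct authors matter because the finding is about *multiple unique*
    counter-speech contributors, not sheer volume of replies."""
    lengths = defaultdict(int)
    counter_authors = defaultdict(set)
    for thread, author, is_counter in posts:
        lengths[thread] += 1
        if is_counter:
            counter_authors[thread].add(author)
    return {t: (lengths[t], len(counter_authors[t])) for t in lengths}

print(thread_stats(posts))
```

With counts like these per thread, you could then test whether threads with more unique counter-speakers really do die out sooner.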

There’s a lot to keep an eye on. I’ll be catching up with You Twitfaces again in a while. Meanwhile here’s Google shyly hiding again.
google6

 


Faraday and the Elephant

If you’ve ever been south of the river in London you’ll probably have seen the Faraday Memorial, even if you didn’t realise it. The Memorial is the big steel cube in the middle of what used to be a traffic roundabout at Elephant and Castle. The area around it is now more pedestrian-friendly. It looks like this.

Faraday memorial

There’s an explanation on a sign beside it. I’ve seen people reading the sign, unlike the days when traffic stopped anyone getting near. Michael Faraday, scientist and inventor, was a local boy. He came from a poor family and didn’t have access to much education, but took it on himself to go to Humphry Davy’s lectures at the Royal Institution. Faraday had been apprenticed to a bookbinder, so he carefully wrote out his notes from the lectures, bound them beautifully and presented them to Davy. That’s how he got his first break. The sign has a very brief summary of Faraday’s life and career, and a little about the memorial and its architect. The memorial is also, appropriately, an electricity substation for the Northern and Bakerloo lines.

Faraday sign at Elephant

If you want to get much better insight into Faraday’s work, I recommend the Faraday Museum in the basement of the Royal Institution. It is small and a little dingy but Faraday’s laboratory has been preserved and recreated, and there are a lot of extraordinary exhibits. Possibly my favourite ever museum curator’s blurb reads “After discovering electro-magnetic induction, Faraday took a holiday in Hastings.” [Pause badly needed there, if only for comic timing.] It continues: “He then returned to his laboratory and created another world-changing invention: the first electric generator.”  It makes me feel I should try a holiday in Hastings this summer.

Here’s a face you might well recognise, although not at this scale. The museum has a blown-up image of a £20 banknote across an entire wall. The note also featured a drawing of the famous Institution lectures.

F4

You can see one of the earliest ever electric batteries in the museum, given to Faraday by its inventor Alexander Volta in 1814. There is also equipment made by Faraday himself, as he had to make most of his kit from scratch. Insulation didn’t yet exist so in order to make a coil, he and his assistants had to wrap string round wire. It could take a week to make an electric coil like this very early one.

F2

Faraday’s glassmaking experiments, working close-up to the furnace without adequate protection, probably caused some of his health problems. He was trying to make very specialist vessels like the glass ‘egg’ he wanted to use to create vacuums he would then fill with different gases. His experiments in passing an electric current through a variety of gases and metals led to the discovery of spectroscopy, which in turn is the basis of a lot of astrophysics as well as earthly physics. Faraday didn’t only invent electrodes. He also came up with the word. We owe him for some of our language as well as for his discoveries and inventions, and for being a public educator.

There is another, much tinier Faraday museum in London at Trinity Buoy Wharf. I’ll go there one day. In my next post I will also explain what I was really doing at the Royal Institution. Meanwhile, here’s a less interesting but maybe better known public artwork from Elephant and Castle although to be fair, it does feature another London elephant.

Elephant Elephant

 

 


New living metaphor seen in the wild

Hope you all caught this bit of news from the local elections last week. Nobody took overall control of Northumberland County Council: the result was at first a dead heat in terms of seats – 33 to the Conservatives, and 33 to their opponents combined. The South Blyth ward was still tied after two recounts, so the result was decided by drawing straws. The Conservative candidate drew the short straw. Literally. It was even filmed.

[Recap here on what I mean by living metaphors – and if you catch any  new ones please tell. I still think they’re rarely found in the wild.]
